From patchwork Fri May 15 10:58:28 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 01/14] arm64: kvm: Fix symbol dependency in __hyp_call_panic_nvhe
Date: Fri, 15 May 2020 11:58:28 +0100
Message-Id: <20200515105841.73532-2-dbrazdil@google.com>

__hyp_call_panic_nvhe contains inline assembly which did not declare
its dependency on the __hyp_panic_string symbol. The static-declared
string has previously been kept alive because of a use in
__hyp_call_panic_vhe.

Fix this in preparation for separating the source files between VHE
and nVHE when the two users land in two different compilation units.
The static variable otherwise gets dropped when compiling the nVHE
source file, causing an undefined symbol linker error later.
Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 8a1e81a400e0..7a7c08029d81 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -836,7 +836,7 @@ static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par,
	 * making sure it is a kernel address and not a PC-relative
	 * reference.
	 */
-	asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
+	asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string));
	__hyp_do_panic(str_va, spsr, elr,
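A standalone sketch of the failure mode fixed above (illustrative only;
'msg' and the function names are made up, and this assumes GCC targeting
arm64 in a non-PIC, kernel-style build):

	/* sketch.c -- not from the patch */
	static const char msg[] = "example string";

	unsigned long broken(void)
	{
		unsigned long va;

		/*
		 * The compiler never sees a C-level use of 'msg' here, so
		 * with no other references the array may be discarded,
		 * leaving the literal-pool load with an undefined symbol.
		 */
		asm volatile("ldr %0, =msg" : "=r" (va));
		return va;
	}

	unsigned long fixed(void)
	{
		unsigned long va;

		/*
		 * Passing the symbol as an "S" (symbolic address) input
		 * operand makes the dependency visible and keeps 'msg'
		 * alive, which is exactly what the patch does.
		 */
		asm volatile("ldr %0, =%1" : "=r" (va) : "S" (msg));
		return va;
	}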
From patchwork Fri May 15 10:58:29 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/14] arm64: kvm: Move __smccc_workaround_1_smc to .rodata
Date: Fri, 15 May 2020 11:58:29 +0100
Message-Id: <20200515105841.73532-3-dbrazdil@google.com>
This snippet of assembly is used by cpu_errata.c to overwrite parts of
the KVM hyp vector. Move it to its own source file and change its ELF
section to .rodata.

Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/Makefile    |  1 +
 arch/arm64/kvm/hyp/hyp-entry.S | 16 ----------------
 arch/arm64/kvm/hyp/smccc_wa.S  | 30 ++++++++++++++++++++++++++++++
 3 files changed, 31 insertions(+), 16 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/smccc_wa.S

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 8c9880783839..5d8357ddc234 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -7,6 +7,7 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
	     $(DISABLE_STACKLEAK_PLUGIN)

 obj-$(CONFIG_KVM) += hyp.o
+obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o

 hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index c2a13ab3c471..65ff99a7e02d 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -319,20 +319,4 @@ SYM_CODE_START(__bp_harden_hyp_vecs)
 1:	.org __bp_harden_hyp_vecs + __BP_HARDEN_HYP_VECS_SZ
	.org 1b
 SYM_CODE_END(__bp_harden_hyp_vecs)
-
-	.popsection
-
-SYM_CODE_START(__smccc_workaround_1_smc)
-	esb
-	sub	sp, sp, #(8 * 4)
-	stp	x2, x3, [sp, #(8 * 0)]
-	stp	x0, x1, [sp, #(8 * 2)]
-	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_1
-	smc	#0
-	ldp	x2, x3, [sp, #(8 * 0)]
-	ldp	x0, x1, [sp, #(8 * 2)]
-	add	sp, sp, #(8 * 4)
-1:	.org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
-	.org 1b
-SYM_CODE_END(__smccc_workaround_1_smc)
 #endif

diff --git a/arch/arm64/kvm/hyp/smccc_wa.S b/arch/arm64/kvm/hyp/smccc_wa.S
new file mode 100644
index 000000000000..aa25b5428e77
--- /dev/null
+++ b/arch/arm64/kvm/hyp/smccc_wa.S
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015-2018 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include
+
+#include
+#include
+
+	/*
+	 * This is not executed directly and is instead copied into the
+	 * vectors by install_bp_hardening_cb().
+	 */
+	.data
+	.pushsection	.rodata
+	.global		__smccc_workaround_1_smc
+__smccc_workaround_1_smc:
+	esb
+	sub	sp, sp, #(8 * 4)
+	stp	x2, x3, [sp, #(8 * 0)]
+	stp	x0, x1, [sp, #(8 * 2)]
+	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_1
+	smc	#0
+	ldp	x2, x3, [sp, #(8 * 0)]
+	ldp	x0, x1, [sp, #(8 * 2)]
+	add	sp, sp, #(8 * 4)
+1:	.org __smccc_workaround_1_smc + __SMCCC_WORKAROUND_1_SMC_SZ
+	.org 1b
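The move to .rodata works because the snippet is pure data from the
hypervisor's point of view: it only ever executes from the vector slot
it is copied into. A rough C sketch of that copy step (hypothetical
names; the real copier is __copy_hyp_vect_bpi() in cpu_errata.c, shown
later in this series):

	#include <string.h>

	/* Stand-ins for __smccc_workaround_1_smc and its size macro. */
	extern char snippet_start[];
	#define SNIPPET_SIZE	32
	#define SLOT_SIZE	2048	/* one SZ_2K vector slot */

	/*
	 * Copy the workaround snippet into one slot of the executable
	 * vector page. The .rodata source bytes never run in place.
	 */
	static void install_snippet(char *vecs_base, int slot)
	{
		memcpy(vecs_base + slot * SLOT_SIZE, snippet_start,
		       SNIPPET_SIZE);
		/* The real kernel also does cache maintenance here. */
	}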
From patchwork Fri May 15 10:58:30 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/14] arm64: kvm: Formalize hypcall ABI
Date: Fri, 15 May 2020 11:58:30 +0100
Message-Id: <20200515105841.73532-4-dbrazdil@google.com>

In preparation for unmapping .hyp.text from kernel memory mappings,
convert the current EL1 - EL2 KVM ABI to use hypercall numbers in lieu
of direct function pointers.
While this in itself does not provide any isolation, it is a first
step towards having a well-defined EL2 ABI. The implementation is
based on a jump table of known host HVC handlers, indexed by the
hypercall ID. Relative-offset branches were chosen over a
sys_call_table-like array of function pointers to avoid the need for
re-computing the addresses under hyp memory mappings.

Hypcall IDs start at 0x1000 because comments in hyp.S state that lower
IDs are allocated for hyp stub operations. This was not originally
honored by hyp-entry.S: only the actually used IDs would be recognized
and all other values would be treated as function pointers. This is
cleaned up and all IDs lower than 0x1000 are routed to
__kvm_handle_stub_hvc.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_host.h            | 10 ++--
 arch/arm64/include/asm/kvm_host_hypercalls.h | 59 ++++++++++++++++++++
 arch/arm64/kvm/hyp.S                         | 18 +++---
 arch/arm64/kvm/hyp/hyp-entry.S               | 56 +++++++++++--------
 4 files changed, 107 insertions(+), 36 deletions(-)
 create mode 100644 arch/arm64/include/asm/kvm_host_hypercalls.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 32c8a675e5a4..132233b6d853 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include

 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -446,7 +447,7 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 void kvm_arm_halt_guest(struct kvm *kvm);
 void kvm_arm_resume_guest(struct kvm *kvm);

-u64 __kvm_call_hyp(void *hypfn, ...);
+u64 __kvm_call_hyp(unsigned long hcall_id, ...);

 /*
  * The couple of isb() below are there to guarantee the same behaviour
@@ -459,7 +460,8 @@ u64 __kvm_call_hyp(void *hypfn, ...);
			f(__VA_ARGS__);					\
			isb();						\
		} else {						\
-			__kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__); \
+			__kvm_call_hyp(KVM_HOST_HCALL_ID(f),		\
+				       ##__VA_ARGS__);			\
		}							\
	} while(0)
@@ -471,7 +473,7 @@ u64 __kvm_call_hyp(void *hypfn, ...);
			ret = f(__VA_ARGS__);				\
			isb();						\
		} else {						\
-			ret = __kvm_call_hyp(kvm_ksym_ref(f),		\
+			ret = __kvm_call_hyp(KVM_HOST_HCALL_ID(f),	\
					     ##__VA_ARGS__);		\
		}							\
									\
@@ -551,7 +553,7 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
	 * cpus_have_const_cap() wrapper.
	 */
	BUG_ON(!system_capabilities_finalized());
-	__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);
+	__kvm_call_hyp((unsigned long)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2);

	/*
	 * Disabling SSBD on a non-VHE system requires us to enable SSBS

diff --git a/arch/arm64/include/asm/kvm_host_hypercalls.h b/arch/arm64/include/asm/kvm_host_hypercalls.h
new file mode 100644
index 000000000000..ed02878fbda5
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_host_hypercalls.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google, inc
+ */
+
+#ifndef __KVM_HOST_HCALL
+#define __KVM_HOST_HCALL(x)
+#endif
+
+#define __KVM_HOST_HCALL_ID___kvm_enable_ssbs		0
+__KVM_HOST_HCALL(__kvm_enable_ssbs)
+
+#define __KVM_HOST_HCALL_ID___kvm_get_mdcr_el2		1
+__KVM_HOST_HCALL(__kvm_get_mdcr_el2)
+
+#define __KVM_HOST_HCALL_ID___kvm_timer_set_cntvoff	2
+__KVM_HOST_HCALL(__kvm_timer_set_cntvoff)
+
+#define __KVM_HOST_HCALL_ID___kvm_tlb_flush_local_vmid	3
+__KVM_HOST_HCALL(__kvm_tlb_flush_local_vmid)
+
+#define __KVM_HOST_HCALL_ID___kvm_flush_vm_context	4
+__KVM_HOST_HCALL(__kvm_flush_vm_context)
+
+#define __KVM_HOST_HCALL_ID___kvm_vcpu_run_nvhe	5
+__KVM_HOST_HCALL(__kvm_vcpu_run_nvhe)
+
+#define __KVM_HOST_HCALL_ID___kvm_tlb_flush_vmid	6
+__KVM_HOST_HCALL(__kvm_tlb_flush_vmid)
+
+#define __KVM_HOST_HCALL_ID___kvm_tlb_flush_vmid_ipa	7
+__KVM_HOST_HCALL(__kvm_tlb_flush_vmid_ipa)
+
+#define __KVM_HOST_HCALL_ID___vgic_v3_init_lrs		8
+__KVM_HOST_HCALL(__vgic_v3_init_lrs)
+
+#define __KVM_HOST_HCALL_ID___vgic_v3_get_ich_vtr_el2	9
+__KVM_HOST_HCALL(__vgic_v3_get_ich_vtr_el2)
+
+#define __KVM_HOST_HCALL_ID___vgic_v3_write_vmcr	10
+__KVM_HOST_HCALL(__vgic_v3_write_vmcr)
+
+#define __KVM_HOST_HCALL_ID___vgic_v3_restore_aprs	11
+__KVM_HOST_HCALL(__vgic_v3_restore_aprs)
+
+#define __KVM_HOST_HCALL_ID___vgic_v3_read_vmcr	12
+__KVM_HOST_HCALL(__vgic_v3_read_vmcr)
+
+#define __KVM_HOST_HCALL_ID___vgic_v3_save_aprs	13
+__KVM_HOST_HCALL(__vgic_v3_save_aprs)
+
+#define KVM_HOST_HCALL_NR				14
+
+/*
+ * Offset KVM hypercall IDs to avoid clashing with stub hypercalls
+ * (defined in asm/virt.h).
+ */
+#define KVM_HOST_HCALL_BASE	(0x1000UL)
+#define KVM_HOST_HCALL_ID(name)	(KVM_HOST_HCALL_BASE + __KVM_HOST_HCALL_ID_##name)

diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 3c79a1124af2..f603d03cb599 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -11,22 +11,20 @@
 #include

 /*
- * u64 __kvm_call_hyp(void *hypfn, ...);
+ * u64 __kvm_call_hyp(unsigned long hcall_id, ...);
  *
  * This is not really a variadic function in the classic C-way and care must
  * be taken when calling this to ensure parameters are passed in registers
  * only, since the stack will change between the caller and the callee.
  *
- * Call the function with the first argument containing a pointer to the
- * function you wish to call in Hyp mode, and subsequent arguments will be
- * passed as x0, x1, and x2 (a maximum of 3 arguments in addition to the
- * function pointer can be passed). The function being called must be mapped
- * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
- * passed in x0.
+ * Call the function with the first argument containing ID of the function
+ * you wish to call in Hyp mode, as defined in kvm_host_hypercalls.h, and
+ * subsequent arguments will be passed as x0, x1, and x2 (a maximum of
+ * 3 arguments in addition to the hypcall ID can be passed). Return values
+ * are passed in x0.
  *
- * A function pointer with a value less than 0xfff has a special meaning,
- * and is used to implement hyp stubs in the same way as in
- * arch/arm64/kernel/hyp_stub.S.
+ * Hypcalls with ID less than 0x1000 are propagated to operations implemented
+ * in arch/arm64/kernel/hyp_stub.S.
  */
 SYM_FUNC_START(__kvm_call_hyp)
	hvc	#0

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 65ff99a7e02d..ab14de8d0131 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -13,25 +13,12 @@
 #include
 #include
 #include
+#include
 #include

 .text
 .pushsection	.hyp.text, "ax"

-.macro do_el2_call
-	/*
-	 * Shuffle the parameters before calling the function
-	 * pointed to in x0. Assumes parameters in x[1,2,3].
-	 */
-	str	lr, [sp, #-16]!
-	mov	lr, x0
-	mov	x0, x1
-	mov	x1, x2
-	mov	x2, x3
-	blr	lr
-	ldr	lr, [sp], #16
-.endm
-
 el1_sync:				// Guest trapped into EL2

	mrs	x0, esr_el2
@@ -46,9 +33,9 @@ el1_sync:				// Guest trapped into EL2
	/* Here, we're pretty sure the host called HVC. */
	ldp	x0, x1, [sp], #16

-	/* Check for a stub HVC call */
-	cmp	x0, #HVC_STUB_HCALL_NR
-	b.hs	1f
+	/* Check if hcall ID (x0) is in the hyp stub hypercall range. */
+	cmp	x0, #KVM_HOST_HCALL_BASE
+	b.hs	el1_host_hcall

	/*
	 * Compute the idmap address of __kvm_handle_stub_hvc and
@@ -65,13 +52,38 @@ el1_sync:				// Guest trapped into EL2
	sub	x5, x5, x6
	br	x5

+el1_host_hcall:
+	/* Check if hcall ID (x0) is in the KVM host hypercall range. */
+	sub	x0, x0, #KVM_HOST_HCALL_BASE
+	cmp	x0, #KVM_HOST_HCALL_NR
+	b.hs	el1_host_invalid_hvc
+
+	/* Compute address of corresponding branch in the jump table below. */
+	adr	x10, 1f
+	add	x10, x10, x0, lsl #2
+
+	/* Call the host HVC handler. Arguments are in x[1,2,3]. */
+	mov	x0, x1
+	mov	x1, x2
+	mov	x2, x3
+	str	lr, [sp, #-16]!
+	adr	lr, 2f
+	br	x10
+
+	/* Generate jump table of branches to all defined host HVC handlers.
+	 */
 1:
-	/*
-	 * Perform the EL2 call
-	 */
-	kern_hyp_va	x0
-	do_el2_call
+#undef __KVM_HOST_HCALL
+#define __KVM_HOST_HCALL(hcall_fn_name)	\
+	b	hcall_fn_name
+#include
+
+2:
+	ldr	lr, [sp], #16
+	eret
+	sb
+el1_host_invalid_hvc:
+	mov	x0, -ENOSYS
	eret
	sb
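The header-generated jump table above relies on an X-macro pattern:
kvm_host_hypercalls.h is included once for the ID constants and again,
with __KVM_HOST_HCALL redefined, to emit one `b <handler>` per
hypercall. A self-contained C sketch of the same trick (hypothetical
names; it prints a dispatch table instead of emitting branches):

	#include <stdio.h>

	/* Stand-in for kvm_host_hypercalls.h: each entry invokes a
	 * caller-supplied macro exactly once. */
	#define HCALL_LIST			\
		HCALL(flush_ctx,   0)		\
		HCALL(run_vcpu,    1)		\
		HCALL(set_cntvoff, 2)
	#define HCALL_NR 3

	/* First expansion: numeric IDs. */
	#define HCALL(name, id) enum { HCALL_ID_##name = id };
	HCALL_LIST
	#undef HCALL

	/* Second expansion: a table indexed by ID. hyp-entry.S instead
	 * emits one 4-byte `b name` per entry, so base + id * 4 lands
	 * on the right branch without any address translation. */
	#define HCALL(name, id) [id] = #name,
	static const char *hcall_table[HCALL_NR] = { HCALL_LIST };
	#undef HCALL

	int main(void)
	{
		for (int id = 0; id < HCALL_NR; id++)
			printf("hcall %d -> %s\n", id, hcall_table[id]);
		return 0;
	}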
From patchwork Fri May 15 10:58:31 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/14] arm64: kvm: Add build rules for separate nVHE object files
Date: Fri, 15 May 2020 11:58:31 +0100
Message-Id: <20200515105841.73532-5-dbrazdil@google.com>

Add new folder arch/arm64/kvm/hyp/nvhe and a Makefile for building
code that runs in EL2 under nVHE KVM. Compile each source file into a
`.hyp.tmp.o` object first, then prefix all its symbols with
"__kvm_nvhe_" using `objcopy` and produce a `.hyp.o`.
Suffixes were chosen so that it would be possible for VHE and nVHE to
share some source files, compiled with different CFLAGS. nVHE build
rules add -D__KVM_NVHE_HYPERVISOR__.

The nVHE ELF symbol prefix is added to kallsyms.c as ignored. EL2-only
symbols will never appear in EL1 stack traces.

Signed-off-by: David Brazdil
---
 arch/arm64/kernel/image-vars.h   | 12 +++++++++++
 arch/arm64/kvm/hyp/Makefile      |  4 ++--
 arch/arm64/kvm/hyp/nvhe/Makefile | 35 ++++++++++++++++++++++++++++++++
 scripts/kallsyms.c               |  1 +
 4 files changed, 50 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/Makefile

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 7f06ad93fc95..13850134fc28 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -51,4 +51,16 @@ __efistub__ctype		= _ctype;

 #endif

+#ifdef CONFIG_KVM
+
+/*
+ * KVM nVHE code has its own symbol namespace prefixed by __hyp_text_, to
+ * isolate it from the kernel proper. The following symbols are legally
+ * accessed by it, therefore provide aliases to make them linkable.
+ * Do not include symbols which may not be safely accessed under hypervisor
+ * memory mappings.
+ */
+
+#endif /* CONFIG_KVM */
+
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 5d8357ddc234..c9fd8618980d 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -6,10 +6,10 @@
 ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
	     $(DISABLE_STACKLEAK_PLUGIN)

-obj-$(CONFIG_KVM) += hyp.o
+obj-$(CONFIG_KVM) += vhe.o nvhe/
 obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o

-hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
+vhe-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o

 # KVM code is run at a different exception code with a different map, so

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
new file mode 100644
index 000000000000..7d64235dba62
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -0,0 +1,35 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for Kernel-based Virtual Machine module, HYP/nVHE part
+#
+
+asflags-y := -D__KVM_NVHE_HYPERVISOR__
+ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
+	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)
+
+obj-y :=
+
+obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
+extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
+
+$(obj)/%.hyp.tmp.o: $(src)/%.c FORCE
+	$(call if_changed_rule,cc_o_c)
+$(obj)/%.hyp.tmp.o: $(src)/%.S FORCE
+	$(call if_changed_rule,as_o_S)
+$(obj)/%.hyp.o: $(obj)/%.hyp.tmp.o FORCE
+	$(call if_changed,hypcopy)
+
+quiet_cmd_hypcopy = HYPCOPY $@
+      cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@
+
+# KVM nVHE code is run at a different exception code with a different map, so
+# compiler instrumentation that inserts callbacks or checks into the code may
+# cause crashes. Just disable it.
+GCOV_PROFILE	:= n
+KASAN_SANITIZE	:= n
+UBSAN_SANITIZE	:= n
+KCOV_INSTRUMENT	:= n
+
+# Skip objtool checking for this directory because nVHE code is compiled with
+# non-standard build rules.
+OBJECT_FILES_NON_STANDARD := y

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 3e8dea6e0a95..523a1a337ebd 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -109,6 +109,7 @@ static bool is_ignored_symbol(const char *name, char type)
		".LASANPC",		/* s390 kasan local symbols */
		"__crc_",		/* modversions */
		"__efistub_",		/* arm64 EFI stub namespace */
+		"__kvm_nvhe_",		/* arm64 non-VHE KVM namespace */
		NULL
	};
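To see what the hypcopy rule does, here is a rough shell walkthrough
(illustrative only; the file names are invented, and the nm output in
the comment is what one would expect rather than a captured log). Note
that --prefix-symbols renames undefined symbols too, which is why the
image-vars.h aliases in the next patch become necessary:

	# Compile a trivial object, then apply the same objcopy pass.
	cat > demo.c <<'EOF'
	extern int helper(void);	/* an undefined (imported) symbol */
	int my_func(void) { return helper(); }
	EOF
	aarch64-linux-gnu-gcc -c demo.c -o demo.hyp.tmp.o
	aarch64-linux-gnu-objcopy --prefix-symbols=__kvm_nvhe_ \
		demo.hyp.tmp.o demo.hyp.o

	# nm should now list both symbols with the prefix, roughly:
	#   U __kvm_nvhe_helper
	#   T __kvm_nvhe_my_func
	nm demo.hyp.o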
From patchwork Fri May 15 10:58:32 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/14] arm64: kvm: Build hyp-entry.S separately for VHE/nVHE
Date: Fri, 15 May 2020 11:58:32 +0100
Message-Id: <20200515105841.73532-6-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code
separately from VHE and the rest of the kernel.
hyp-entry.S contains the implementation of the KVM hyp vectors. This
code is mostly shared between VHE/nVHE, therefore compile it under
both VHE and nVHE build rules. The nVHE-specific host HVC handler is
hidden behind __KVM_NVHE_HYPERVISOR__. Adjust the code which selects
which KVM hyp vecs to install to choose the correct VHE/nVHE symbol.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_asm.h | 25 ++++++++++++++++++++++++-
 arch/arm64/include/asm/kvm_mmu.h | 16 ++++++++++------
 arch/arm64/include/asm/mmu.h     |  7 -------
 arch/arm64/kernel/cpu_errata.c   |  4 +++-
 arch/arm64/kernel/image-vars.h   | 29 ++++++++++++++++++++++++++++-
 arch/arm64/kvm/hyp/hyp-entry.S   |  2 ++
 arch/arm64/kvm/hyp/nvhe/Makefile |  2 +-
 arch/arm64/kvm/va_layout.c       |  2 +-
 8 files changed, 69 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7c7eeeaab9fa..01242f54c48f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -42,6 +42,22 @@

 #include

+/*
+ * Translate name of a symbol defined in VHE/nVHE hyp implementations
+ * to the name seen by kernel proper. All nVHE symbols are prefixed by
+ * the build system to avoid clashes with the VHE variants.
+ */
+#define __kvm_vhe_sym(sym)	sym
+#define __kvm_nvhe_sym(sym)	__kvm_nvhe_##sym
+
+/*
+ * Define a pair of symbols sharing the same name but one defined in
+ * VHE and the other in nVHE hyp implementations.
+ */
+#define DECLARE_KVM_HYP_SYM(sym)		\
+	extern char __kvm_vhe_sym(sym)[];	\
+	extern char __kvm_nvhe_sym(sym)[]
+
 /* Translate a kernel address of @sym into its equivalent linear mapping */
 #define kvm_ksym_ref(sym)						\
	({								\
@@ -50,6 +66,8 @@
		val = lm_alias(&sym);					\
	val;								\
  })
+#define kvm_ksym_ref_vhe(sym)	kvm_ksym_ref(__kvm_vhe_sym(sym))
+#define kvm_ksym_ref_nvhe(sym)	kvm_ksym_ref(__kvm_nvhe_sym(sym))

 struct kvm;
 struct kvm_vcpu;
@@ -57,7 +75,12 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];

-extern char __kvm_hyp_vector[];
+DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
+
+#ifdef CONFIG_KVM_INDIRECT_VECTORS
+DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
+extern atomic_t arm64_el2_vector_last_slot;
+#endif

 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 30b0e8d6b895..871ef591042c 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -477,11 +477,15 @@ extern int __kvm_harden_el2_vector_slot;
 static inline void *kvm_get_hyp_vector(void)
 {
	struct bp_hardening_data *data = arm64_get_bp_hardening_data();
-	void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
	int slot = -1;
+	void *vect = kern_hyp_va(has_vhe()
+		? kvm_ksym_ref_vhe(__kvm_hyp_vector)
+		: kvm_ksym_ref_nvhe(__kvm_hyp_vector));

	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) {
-		vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
+		vect = kern_hyp_va(has_vhe()
+			? kvm_ksym_ref_vhe(__bp_harden_hyp_vecs)
+			: kvm_ksym_ref_nvhe(__bp_harden_hyp_vecs));
		slot = data->hyp_vectors_slot;
	}

@@ -510,12 +514,11 @@ static inline int kvm_map_vectors(void)
	 *  HBP +  HEL2 -> use hardened vertors and use exec mapping
	 */
	if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
-		__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
-		__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
+		__kvm_bp_vect_base = kern_hyp_va(kvm_ksym_ref_nvhe(__bp_harden_hyp_vecs));
	}

	if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
-		phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
+		phys_addr_t vect_pa = __pa_symbol(__kvm_nvhe_sym(__bp_harden_hyp_vecs));
		unsigned long size = __BP_HARDEN_HYP_VECS_SZ;

		/*
@@ -534,7 +537,8 @@ static inline int kvm_map_vectors(void)
 #else
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
+	return kern_hyp_va(has_vhe() ? kvm_ksym_ref_vhe(__kvm_hyp_vector)
+				     : kvm_ksym_ref_nvhe(__kvm_hyp_vector));
 }

 static inline int kvm_map_vectors(void)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 68140fdd89d6..4d913f6dd366 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -42,13 +42,6 @@ struct bp_hardening_data {
	bp_hardening_cb_t	fn;
 };

-#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
-     defined(CONFIG_HARDEN_EL2_VECTORS))
-
-extern char __bp_harden_hyp_vecs[];
-extern atomic_t arm64_el2_vector_last_slot;
-#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
-
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a102321fc8a2..94af3af12f44 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -117,7 +117,9 @@ DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
				const char *hyp_vecs_end)
 {
-	void *dst = lm_alias(__bp_harden_hyp_vecs + slot * SZ_2K);
+	char *vec = has_vhe() ? __kvm_vhe_sym(__bp_harden_hyp_vecs)
+			      : __kvm_nvhe_sym(__bp_harden_hyp_vecs);
+	void *dst = lm_alias(vec + slot * SZ_2K);
	int i;

	for (i = 0; i < SZ_2K; i += 0x80)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 13850134fc28..dc9c14d91d39 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -54,13 +54,40 @@ __efistub__ctype		= _ctype;
 #ifdef CONFIG_KVM

 /*
- * KVM nVHE code has its own symbol namespace prefixed by __hyp_text_, to
+ * KVM nVHE code has its own symbol namespace prefixed by __kvm_nvhe_, to
  * isolate it from the kernel proper. The following symbols are legally
  * accessed by it, therefore provide aliases to make them linkable.
  * Do not include symbols which may not be safely accessed under hypervisor
  * memory mappings.
  */
+__kvm_nvhe___guest_exit			= __guest_exit;
+__kvm_nvhe___kvm_enable_ssbs		= __kvm_enable_ssbs;
+__kvm_nvhe___kvm_flush_vm_context	= __kvm_flush_vm_context;
+__kvm_nvhe___kvm_get_mdcr_el2		= __kvm_get_mdcr_el2;
+__kvm_nvhe___kvm_handle_stub_hvc	= __kvm_handle_stub_hvc;
+__kvm_nvhe___kvm_timer_set_cntvoff	= __kvm_timer_set_cntvoff;
+__kvm_nvhe___kvm_tlb_flush_local_vmid	= __kvm_tlb_flush_local_vmid;
+__kvm_nvhe___kvm_tlb_flush_vmid		= __kvm_tlb_flush_vmid;
+__kvm_nvhe___kvm_tlb_flush_vmid_ipa	= __kvm_tlb_flush_vmid_ipa;
+__kvm_nvhe___kvm_vcpu_run_nvhe		= __kvm_vcpu_run_nvhe;
+__kvm_nvhe___vgic_v3_get_ich_vtr_el2	= __vgic_v3_get_ich_vtr_el2;
+__kvm_nvhe___vgic_v3_init_lrs		= __vgic_v3_init_lrs;
+__kvm_nvhe___vgic_v3_read_vmcr		= __vgic_v3_read_vmcr;
+__kvm_nvhe___vgic_v3_restore_aprs	= __vgic_v3_restore_aprs;
+__kvm_nvhe___vgic_v3_save_aprs		= __vgic_v3_save_aprs;
+__kvm_nvhe___vgic_v3_write_vmcr		= __vgic_v3_write_vmcr;
+__kvm_nvhe_abort_guest_exit_end		= abort_guest_exit_end;
+__kvm_nvhe_abort_guest_exit_start	= abort_guest_exit_start;
+__kvm_nvhe_arm64_enable_wa2_handling	= arm64_enable_wa2_handling;
+__kvm_nvhe_arm64_ssbd_callback_required	= arm64_ssbd_callback_required;
+__kvm_nvhe_hyp_panic			= hyp_panic;
+__kvm_nvhe_kimage_voffset		= kimage_voffset;
+__kvm_nvhe_kvm_host_data		= kvm_host_data;
+__kvm_nvhe_kvm_patch_vector_branch	= kvm_patch_vector_branch;
+__kvm_nvhe_kvm_update_va_mask		= kvm_update_va_mask;
+__kvm_nvhe_panic			= panic;
+
 #endif /* CONFIG_KVM */

 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */

diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index ab14de8d0131..81c65fa65183 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -27,6 +27,7 @@ el1_sync:				// Guest trapped into EL2
	ccmp	x0, #ESR_ELx_EC_HVC32, #4, ne
	b.ne	el1_trap

+#ifdef __KVM_NVHE_HYPERVISOR__
	mrs	x1, vttbr_el2		// If vttbr is valid, the guest
	cbnz	x1, el1_hvc_guest	// called HVC

@@ -86,6 +87,7 @@ el1_host_invalid_hvc:
	mov	x0, -ENOSYS
	eret
	sb
+#endif /* __KVM_NVHE_HYPERVISOR__ */

 el1_hvc_guest:
	/*

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 7d64235dba62..c68801e24950 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \
	     -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN)

-obj-y :=
+obj-y := ../hyp-entry.o

 obj-y := $(patsubst %.o,%.hyp.o,$(obj-y))
 extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))

diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index a4f48c1ac28c..157d106235f7 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -150,7 +150,7 @@ void kvm_patch_vector_branch(struct alt_instr *alt,
	/*
	 * Compute HYP VA by using the same computation as kern_hyp_va()
	 */
-	addr = (uintptr_t)kvm_ksym_ref(__kvm_hyp_vector);
+	addr = (uintptr_t)kvm_ksym_ref_nvhe(__kvm_hyp_vector);
	addr &= va_mask;
	addr |= tag_val << tag_lsb;
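A self-contained C sketch of the DECLARE_KVM_HYP_SYM pattern used
above (the array definitions stand in for the two linked hyp images,
and the flag stands in for has_vhe(); all names are hypothetical):

	#include <stdio.h>

	#define __kvm_vhe_sym(sym)	sym
	#define __kvm_nvhe_sym(sym)	__kvm_nvhe_##sym

	/* Declare both variants of a hyp symbol under one logical name. */
	#define DECLARE_KVM_HYP_SYM(sym)		\
		extern char __kvm_vhe_sym(sym)[];	\
		extern char __kvm_nvhe_sym(sym)[]

	DECLARE_KVM_HYP_SYM(hyp_vector);

	/* In the kernel these come from the VHE objects and from the
	 * objcopy-prefixed nVHE objects; dummies here so the sketch links. */
	char hyp_vector[16];
	char __kvm_nvhe_hyp_vector[16];

	static int vhe_mode;	/* stand-in for has_vhe() */

	static char *get_hyp_vector(void)
	{
		return vhe_mode ? __kvm_vhe_sym(hyp_vector)
				: __kvm_nvhe_sym(hyp_vector);
	}

	int main(void)
	{
		printf("selected %p\n", (void *)get_hyp_vector());
		return 0;
	}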
From patchwork Fri May 15 10:58:33 2020
From: David Brazdil
To: Catalin Marinas , James Morse , Julien Thierry , Marc Zyngier , Suzuki K Poulose , Will Deacon
Subject: [PATCH v2 06/14] arm64: kvm: Split hyp/tlb.c to VHE/nVHE
Date: Fri, 15 May 2020 11:58:33 +0100
Message-Id: <20200515105841.73532-7-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>
Cc: David Brazdil , kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

tlb.c contains code for flushing the TLB, with parts shared between VHE/nVHE. These common routines are moved into a header file tlb.h, VHE-specific code remains in tlb.c and nVHE-specific code is moved to nvhe/tlb.c. The header file expects its users to implement two helper functions declared at the top of the file.
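The "header expects its users to implement two helper functions" arrangement is the load-bearing trick of this split. A minimal self-contained sketch of the pattern, with illustrative names rather than the actual kernel code:

        /* shared.h - common logic parameterised by per-unit helpers. */
        #ifndef SHARED_H
        #define SHARED_H

        /* Every .c file that includes this header must define these two. */
        static void __switch_to_guest(void);
        static void __switch_to_host(void);

        static inline void __flush_common(void)
        {
                __switch_to_guest();
                /* ... shared invalidation sequence would go here ... */
                __switch_to_host();
        }

        #endif

        /* nvhe_unit.c - supplies the nVHE flavour of the helpers. */
        #include "shared.h"

        static void __switch_to_guest(void) { /* nVHE-specific setup */ }
        static void __switch_to_host(void) { /* nVHE-specific teardown */ }

        void flush(void)
        {
                __flush_common(); /* compiled against this unit's helpers */
        }

Because the helpers are static, each compilation unit gets its own copy of the common routines, specialised at compile time, with no function pointers and no runtime has_vhe() checks left on the hyp path.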
Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 8 +- arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/tlb.c | 69 +++++++++++++ arch/arm64/kvm/hyp/tlb.c | 170 +++---------------------------- arch/arm64/kvm/hyp/tlb.h | 134 ++++++++++++++++++++++++ 5 files changed, 221 insertions(+), 162 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/tlb.c create mode 100644 arch/arm64/kvm/hyp/tlb.h diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index dc9c14d91d39..7cafa0266847 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -62,14 +62,11 @@ __efistub__ctype = _ctype; */ __kvm_nvhe___guest_exit = __guest_exit; +__kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; -__kvm_nvhe___kvm_flush_vm_context = __kvm_flush_vm_context; __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___kvm_tlb_flush_local_vmid = __kvm_tlb_flush_local_vmid; -__kvm_nvhe___kvm_tlb_flush_vmid = __kvm_tlb_flush_vmid; -__kvm_nvhe___kvm_tlb_flush_vmid_ipa = __kvm_tlb_flush_vmid_ipa; __kvm_nvhe___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; __kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; __kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs; @@ -79,8 +76,11 @@ __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs; __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; __kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end; __kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start; +__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready; __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; +__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys; +__kvm_nvhe_cpu_hwcaps = cpu_hwcaps; __kvm_nvhe_hyp_panic = hyp_panic; __kvm_nvhe_kimage_voffset = kimage_voffset; __kvm_nvhe_kvm_host_data = kvm_host_data; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index c68801e24950..bed7260097f5 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := ../hyp-entry.o +obj-y := tlb.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c new file mode 100644 index 000000000000..1b8f4000f98c --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -0,0 +1,69 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include + +#include +#include +#include + +#include "../tlb.h" + +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt) +{ + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + u64 val; + + /* + * For CPUs that are affected by ARM 1319367, we need to + * avoid a host Stage-1 walk while we have the guest's + * VMID set in the VTTBR in order to invalidate TLBs. + * We're guaranteed that the S1 MMU is enabled, so we can + * simply set the EPD bits to avoid any further TLB fill. 
+ */ + val = cxt->tcr = read_sysreg_el1(SYS_TCR); + val |= TCR_EPD1_MASK | TCR_EPD0_MASK; + write_sysreg_el1(val, SYS_TCR); + isb(); + } + + __load_guest_stage2(kvm); + isb(); +} + +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt) +{ + write_sysreg(0, vttbr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + /* Ensure write of the host VMID */ + isb(); + /* Restore the host's TCR_EL1 */ + write_sysreg_el1(cxt->tcr, SYS_TCR); + } +} + +void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +{ + __tlb_flush_vmid_ipa(kvm, ipa); +} + +void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +{ + __tlb_flush_vmid(kvm); +} + +void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +{ + __tlb_flush_local_vmid(vcpu); +} + +void __hyp_text __kvm_flush_vm_context(void) +{ + __tlb_flush_vm_context(); +} diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index ceaddbe4279f..ab55b0c4a80c 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -10,14 +10,10 @@ #include #include -struct tlb_inv_context { - unsigned long flags; - u64 tcr; - u64 sctlr; -}; +#include "tlb.h" -static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt) { u64 val; @@ -60,40 +56,8 @@ static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - u64 val; - - /* - * For CPUs that are affected by ARM 1319367, we need to - * avoid a host Stage-1 walk while we have the guest's - * VMID set in the VTTBR in order to invalidate TLBs. - * We're guaranteed that the S1 MMU is enabled, so we can - * simply set the EPD bits to avoid any further TLB fill. 
- */ - val = cxt->tcr = read_sysreg_el1(SYS_TCR); - val |= TCR_EPD1_MASK | TCR_EPD0_MASK; - write_sysreg_el1(val, SYS_TCR); - isb(); - } - - __load_guest_stage2(kvm); - isb(); -} - -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) -{ - if (has_vhe()) - __tlb_switch_to_guest_vhe(kvm, cxt); - else - __tlb_switch_to_guest_nvhe(kvm, cxt); -} - -static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt) { /* * We're done with the TLB operation, let's restore the host's @@ -112,130 +76,22 @@ static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm, local_irq_restore(cxt->flags); } -static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm, - struct tlb_inv_context *cxt) +void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { - write_sysreg(0, vttbr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - /* Ensure write of the host VMID */ - isb(); - /* Restore the host's TCR_EL1 */ - write_sysreg_el1(cxt->tcr, SYS_TCR); - } + __tlb_flush_vmid_ipa(kvm, ipa); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +void __kvm_tlb_flush_vmid(struct kvm *kvm) { - if (has_vhe()) - __tlb_switch_to_host_vhe(kvm, cxt); - else - __tlb_switch_to_host_nvhe(kvm, cxt); + __tlb_flush_vmid(kvm); } -void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { - struct tlb_inv_context cxt; - - dsb(ishst); - - /* Switch to requested VMID */ - kvm = kern_hyp_va(kvm); - __tlb_switch_to_guest(kvm, &cxt); - - /* - * We could do so much better if we had the VA as well. - * Instead, we invalidate Stage-2 for this IPA, and the - * whole of Stage-1. Weep... - */ - ipa >>= 12; - __tlbi(ipas2e1is, ipa); - - /* - * We have to ensure completion of the invalidation at Stage-2, - * since a table walk on another CPU could refill a TLB with a - * complete (S1 + S2) walk based on the old Stage-2 mapping if - * the Stage-1 invalidation happened first. - */ - dsb(ish); - __tlbi(vmalle1is); - dsb(ish); - isb(); - - /* - * If the host is running at EL1 and we have a VPIPT I-cache, - * then we must perform I-cache maintenance at EL2 in order for - * it to have an effect on the guest. Since the guest cannot hit - * I-cache lines allocated with a different VMID, we don't need - * to worry about junk out of guest reset (we nuke the I-cache on - * VMID rollover), but we do need to be careful when remapping - * executable pages for the same guest. This can happen when KSM - * takes a CoW fault on an executable page, copies the page into - * a page that was previously mapped in the guest and then needs - * to invalidate the guest view of the I-cache for that page - * from EL1. To solve this, we invalidate the entire I-cache when - * unmapping a page from a guest if we have a VPIPT I-cache but - * the host is running at EL1. As above, we could do better if - * we had the VA. - * - * The moral of this story is: if you have a VPIPT I-cache, then - * you should be running with VHE enabled. 
- */ - if (!has_vhe() && icache_is_vpipt()) - __flush_icache_all(); - - __tlb_switch_to_host(kvm, &cxt); + __tlb_flush_local_vmid(vcpu); } -void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +void __kvm_flush_vm_context(void) { - struct tlb_inv_context cxt; - - dsb(ishst); - - /* Switch to requested VMID */ - kvm = kern_hyp_va(kvm); - __tlb_switch_to_guest(kvm, &cxt); - - __tlbi(vmalls12e1is); - dsb(ish); - isb(); - - __tlb_switch_to_host(kvm, &cxt); -} - -void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) -{ - struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); - struct tlb_inv_context cxt; - - /* Switch to requested VMID */ - __tlb_switch_to_guest(kvm, &cxt); - - __tlbi(vmalle1); - dsb(nsh); - isb(); - - __tlb_switch_to_host(kvm, &cxt); -} - -void __hyp_text __kvm_flush_vm_context(void) -{ - dsb(ishst); - __tlbi(alle1is); - - /* - * VIPT and PIPT caches are not affected by VMID, so no maintenance - * is necessary across a VMID rollover. - * - * VPIPT caches constrain lookup and maintenance to the active VMID, - * so we need to invalidate lines with a stale VMID to avoid an ABA - * race after multiple rollovers. - * - */ - if (icache_is_vpipt()) - asm volatile("ic ialluis"); - - dsb(ish); + __tlb_flush_vm_context(); } diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h new file mode 100644 index 000000000000..841ef400c8ec --- /dev/null +++ b/arch/arm64/kvm/hyp/tlb.h @@ -0,0 +1,134 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_TLB_H__ +#define __ARM64_KVM_HYP_TLB_H__ + +#include + +#include +#include +#include + +struct tlb_inv_context { + unsigned long flags; + u64 tcr; + u64 sctlr; +}; + +static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, + struct tlb_inv_context *cxt); +static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, + struct tlb_inv_context *cxt); + +static inline void __hyp_text +__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +{ + struct tlb_inv_context cxt; + + dsb(ishst); + + /* Switch to requested VMID */ + kvm = kern_hyp_va(kvm); + __tlb_switch_to_guest(kvm, &cxt); + + /* + * We could do so much better if we had the VA as well. + * Instead, we invalidate Stage-2 for this IPA, and the + * whole of Stage-1. Weep... + */ + ipa >>= 12; + __tlbi(ipas2e1is, ipa); + + /* + * We have to ensure completion of the invalidation at Stage-2, + * since a table walk on another CPU could refill a TLB with a + * complete (S1 + S2) walk based on the old Stage-2 mapping if + * the Stage-1 invalidation happened first. + */ + dsb(ish); + __tlbi(vmalle1is); + dsb(ish); + isb(); + + /* + * If the host is running at EL1 and we have a VPIPT I-cache, + * then we must perform I-cache maintenance at EL2 in order for + * it to have an effect on the guest. Since the guest cannot hit + * I-cache lines allocated with a different VMID, we don't need + * to worry about junk out of guest reset (we nuke the I-cache on + * VMID rollover), but we do need to be careful when remapping + * executable pages for the same guest. This can happen when KSM + * takes a CoW fault on an executable page, copies the page into + * a page that was previously mapped in the guest and then needs + * to invalidate the guest view of the I-cache for that page + * from EL1. To solve this, we invalidate the entire I-cache when + * unmapping a page from a guest if we have a VPIPT I-cache but + * the host is running at EL1. As above, we could do better if + * we had the VA. 
+ * + * The moral of this story is: if you have a VPIPT I-cache, then + * you should be running with VHE enabled. + */ + if (!has_vhe() && icache_is_vpipt()) + __flush_icache_all(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) +{ + struct tlb_inv_context cxt; + + dsb(ishst); + + /* Switch to requested VMID */ + kvm = kern_hyp_va(kvm); + __tlb_switch_to_guest(kvm, &cxt); + + __tlbi(vmalls12e1is); + dsb(ish); + isb(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +{ + struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); + struct tlb_inv_context cxt; + + /* Switch to requested VMID */ + __tlb_switch_to_guest(kvm, &cxt); + + __tlbi(vmalle1); + dsb(nsh); + isb(); + + __tlb_switch_to_host(kvm, &cxt); +} + +static inline void __hyp_text __tlb_flush_vm_context(void) +{ + dsb(ishst); + __tlbi(alle1is); + + /* + * VIPT and PIPT caches are not affected by VMID, so no maintenance + * is necessary across a VMID rollover. + * + * VPIPT caches constrain lookup and maintenance to the active VMID, + * so we need to invalidate lines with a stale VMID to avoid an ABA + * race after multiple rollovers. + * + */ + if (icache_is_vpipt()) + asm volatile("ic ialluis"); + + dsb(ish); +} + +#endif /* __ARM64_KVM_HYP_TLB_H__ */
From patchwork Fri May 15 10:58:34 2020
From: David Brazdil
To: Catalin Marinas , James Morse , Julien Thierry , Marc Zyngier , Suzuki K Poulose , Will Deacon
Subject: [PATCH v2 07/14] arm64: kvm: Split hyp/switch.c to VHE/nVHE
Date: Fri, 15 May 2020 11:58:34 +0100
Message-Id: <20200515105841.73532-8-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>
Cc: David Brazdil , kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.

switch.c implements context-switching for KVM, with large parts shared between VHE/nVHE. These common routines are moved to switch.h, VHE-specific code is left in switch.c and nVHE-specific code is moved to nvhe/switch.c.

Previously __kvm_vcpu_run needed a different symbol name for VHE/nVHE. This is cleaned up and the caller in arm.c simplified.
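Unifying the symbol name means arm.c no longer carries its own has_vhe() branch; the selection collapses into the generic hyp-call wrapper. A simplified sketch of the dispatch shape implied by this change -- an assumption about the wrapper, not a quote of its real definition, and __kvm_call_hyp_by_id is a hypothetical name:

        /*
         * Sketch: on VHE the kernel already runs at EL2, so the function
         * is called directly; on nVHE the call becomes a hypercall routed
         * by the ID from kvm_host_hypercalls.h (see this patch's hunk).
         */
        #define kvm_call_hyp_ret(f, ...)                                \
        ({                                                              \
                typeof(f(__VA_ARGS__)) ret;                             \
                if (has_vhe())                                          \
                        ret = f(__VA_ARGS__);                           \
                else                                                    \
                        ret = __kvm_call_hyp_by_id(                     \
                                __KVM_HOST_HCALL_ID_##f,                \
                                ##__VA_ARGS__);                         \
                ret;                                                    \
        })

With that shape the caller can write ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu) unconditionally, which is exactly the simplification visible in the arm.c hunk below.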
Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_asm.h | 4 +- arch/arm64/include/asm/kvm_host_hypercalls.h | 4 +- arch/arm64/include/asm/kvm_hyp.h | 5 + arch/arm64/kernel/image-vars.h | 25 +- arch/arm64/kvm/arm.c | 6 +- arch/arm64/kvm/hyp/hyp-entry.S | 2 + arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/switch.c | 271 ++++++++ arch/arm64/kvm/hyp/switch.c | 688 +------------------ arch/arm64/kvm/hyp/switch.h | 446 ++++++++++++ arch/arm64/kvm/hyp/sysreg-sr.c | 4 +- 11 files changed, 769 insertions(+), 688 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/switch.c create mode 100644 arch/arm64/kvm/hyp/switch.h diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index 01242f54c48f..c0ba15c9b190 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -89,9 +89,7 @@ extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu); extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high); -extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu); - -extern int __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu); +extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu); extern u64 __vgic_v3_get_ich_vtr_el2(void); extern u64 __vgic_v3_read_vmcr(void); diff --git a/arch/arm64/include/asm/kvm_host_hypercalls.h b/arch/arm64/include/asm/kvm_host_hypercalls.h index ed02878fbda5..8aa9bbd05026 100644 --- a/arch/arm64/include/asm/kvm_host_hypercalls.h +++ b/arch/arm64/include/asm/kvm_host_hypercalls.h @@ -22,8 +22,8 @@ __KVM_HOST_HCALL(__kvm_tlb_flush_local_vmid) #define __KVM_HOST_HCALL_ID___kvm_flush_vm_context 4 __KVM_HOST_HCALL(__kvm_flush_vm_context) -#define __KVM_HOST_HCALL_ID___kvm_vcpu_run_nvhe 5 -__KVM_HOST_HCALL(__kvm_vcpu_run_nvhe) +#define __KVM_HOST_HCALL_ID___kvm_vcpu_run 5 +__KVM_HOST_HCALL(__kvm_vcpu_run) #define __KVM_HOST_HCALL_ID___kvm_tlb_flush_vmid 6 __KVM_HOST_HCALL(__kvm_tlb_flush_vmid) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index fe57f60f06a8..0f535692d1d8 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -82,11 +82,16 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu); void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); +#ifndef
__KVM_NVHE_HYPERVISOR__ void activate_traps_vhe_load(struct kvm_vcpu *vcpu); void deactivate_traps_vhe_put(void); +#endif u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt); + +#ifdef __KVM_NVHE_HYPERVISOR__ void __noreturn __hyp_do_panic(unsigned long, ...); +#endif /* * Must be called from hyp code running at EL2 with an updated VTTBR diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 7cafa0266847..f8d94190af80 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,18 +61,34 @@ __efistub__ctype = _ctype; * memory mappings. */ +__kvm_nvhe___debug_switch_to_guest = __debug_switch_to_guest; +__kvm_nvhe___debug_switch_to_host = __debug_switch_to_host; +__kvm_nvhe___fpsimd_restore_state = __fpsimd_restore_state; +__kvm_nvhe___fpsimd_save_state = __fpsimd_save_state; +__kvm_nvhe___guest_enter = __guest_enter; __kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; __kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___kvm_vcpu_run_nvhe = __kvm_vcpu_run_nvhe; +__kvm_nvhe___sysreg32_restore_state = __sysreg32_restore_state; +__kvm_nvhe___sysreg32_save_state = __sysreg32_save_state; +__kvm_nvhe___sysreg_restore_state_nvhe = __sysreg_restore_state_nvhe; +__kvm_nvhe___sysreg_save_state_nvhe = __sysreg_save_state_nvhe; +__kvm_nvhe___timer_disable_traps = __timer_disable_traps; +__kvm_nvhe___timer_enable_traps = __timer_enable_traps; +__kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; +__kvm_nvhe___vgic_v3_activate_traps = __vgic_v3_activate_traps; +__kvm_nvhe___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; __kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; __kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs; +__kvm_nvhe___vgic_v3_perform_cpuif_access = __vgic_v3_perform_cpuif_access; __kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr; __kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs; +__kvm_nvhe___vgic_v3_restore_state = __vgic_v3_restore_state; __kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs; +__kvm_nvhe___vgic_v3_save_state = __vgic_v3_save_state; __kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; __kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end; __kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start; @@ -81,12 +97,17 @@ __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; __kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys; __kvm_nvhe_cpu_hwcaps = cpu_hwcaps; -__kvm_nvhe_hyp_panic = hyp_panic; __kvm_nvhe_kimage_voffset = kimage_voffset; __kvm_nvhe_kvm_host_data = kvm_host_data; __kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch; +__kvm_nvhe_kvm_skip_instr32 = kvm_skip_instr32; __kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask; +__kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state; __kvm_nvhe_panic = panic; +__kvm_nvhe_sve_load_state = sve_load_state; +__kvm_nvhe_sve_save_state = sve_save_state; +__kvm_nvhe_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; +__kvm_nvhe_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; #endif /* CONFIG_KVM */ diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index c958bb37b769..dea249dc82b3 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -749,11 +749,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu 
*vcpu, struct kvm_run *run) trace_kvm_entry(*vcpu_pc(vcpu)); guest_enter_irqoff(); - if (has_vhe()) { - ret = kvm_vcpu_run_vhe(vcpu); - } else { - ret = kvm_call_hyp_ret(__kvm_vcpu_run_nvhe, vcpu); - } + ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu); vcpu->mode = OUTSIDE_GUEST_MODE; vcpu->stat.exits++; diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 81c65fa65183..7868f78b197a 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -194,6 +194,7 @@ el2_error: eret sb +#ifdef __KVM_NVHE_HYPERVISOR__ SYM_FUNC_START(__hyp_do_panic) mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\ PSR_MODE_EL1h) @@ -203,6 +204,7 @@ SYM_FUNC_START(__hyp_do_panic) eret sb SYM_FUNC_END(__hyp_do_panic) +#endif SYM_CODE_START(__hyp_panic) get_host_ctxt x0, x1 diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index bed7260097f5..bbfd9d27d742 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := tlb.o ../hyp-entry.o +obj-y := switch.o tlb.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c new file mode 100644 index 000000000000..4294beed3dc1 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -0,0 +1,271 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../switch.h" + +static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + ___activate_traps(vcpu); + __activate_traps_common(vcpu); + + val = CPTR_EL2_DEFAULT; + val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM; + if (!update_fp_enabled(vcpu)) { + val |= CPTR_EL2_TFP; + __activate_traps_fpsimd32(vcpu); + } + + write_sysreg(val, cptr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; + + isb(); + /* + * At this stage, and thanks to the above isb(), S2 is + * configured and enabled. We can now restore the guest's S1 + * configuration: SCTLR, and only then TCR. + */ + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + isb(); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } +} + +static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) +{ + u64 mdcr_el2; + + ___deactivate_traps(vcpu); + + mdcr_el2 = read_sysreg(mdcr_el2); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + u64 val; + + /* + * Set the TCR and SCTLR registers in the exact opposite + * sequence as __activate_traps (first prevent walks, + * then force the MMU on). A generous sprinkling of isb() + * ensure that things happen in this exact order. 
+ */ + val = read_sysreg_el1(SYS_TCR); + write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR); + isb(); + val = read_sysreg_el1(SYS_SCTLR); + write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); + isb(); + } + + __deactivate_traps_common(); + + mdcr_el2 &= MDCR_EL2_HPMN_MASK; + mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; + + write_sysreg(mdcr_el2, mdcr_el2); + write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); + write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); +} + +static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) +{ + write_sysreg(0, vttbr_el2); +} + +/* Save VGICv3 state on non-VHE systems */ +static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) +{ + if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { + __vgic_v3_save_state(vcpu); + __vgic_v3_deactivate_traps(vcpu); + } +} + +/* Restore VGICv3 state on non-VHE systems */ +static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) +{ + if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { + __vgic_v3_activate_traps(vcpu); + __vgic_v3_restore_state(vcpu); + } +} + +/** + * Disable host events, enable guest events + */ +static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenclr_el0); + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenset_el0); + + return (pmu->events_host || pmu->events_guest); +} + +/** + * Disable guest events, enable host events + */ +static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +{ + struct kvm_host_data *host; + struct kvm_pmu_events *pmu; + + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); + pmu = &host->pmu_events; + + if (pmu->events_guest) + write_sysreg(pmu->events_guest, pmcntenclr_el0); + + if (pmu->events_host) + write_sysreg(pmu->events_host, pmcntenset_el0); +} + +/* Switch to the guest for legacy non-VHE systems */ +int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + bool pmu_switch_needed; + u64 exit_code; + + /* + * Having IRQs masked via PMR when entering the guest means the GIC + * will not signal the CPU of interrupts of lower priority, and the + * only way to get out will be via guest exceptions. + * Naturally, we want to avoid this. + */ + if (system_uses_irq_prio_masking()) { + gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); + pmr_sync(); + } + + vcpu = kern_hyp_va(vcpu); + + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); + host_ctxt->__hyp_running_vcpu = vcpu; + guest_ctxt = &vcpu->arch.ctxt; + + pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); + + __sysreg_save_state_nvhe(host_ctxt); + + /* + * We must restore the 32-bit state before the sysregs, thanks + * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). + * + * Also, and in order to be able to deal with erratum #1319537 (A57) + * and #1319367 (A72), we must ensure that all VM-related sysreg are + * restored before we enable S2 translation. + */ + __sysreg32_restore_state(vcpu); + __sysreg_restore_state_nvhe(guest_ctxt); + + __activate_vm(kern_hyp_va(vcpu->kvm)); + __activate_traps(vcpu); + + __hyp_vgic_restore_state(vcpu); + __timer_enable_traps(vcpu); + + __debug_switch_to_guest(vcpu); + + __set_guest_arch_workaround_state(vcpu); + + do { + /* Jump in the fire!
*/ + exit_code = __guest_enter(vcpu, host_ctxt); + + /* And we're baaack! */ + } while (fixup_guest_exit(vcpu, &exit_code)); + + __set_host_arch_workaround_state(vcpu); + + __sysreg_save_state_nvhe(guest_ctxt); + __sysreg32_save_state(vcpu); + __timer_disable_traps(vcpu); + __hyp_vgic_save_state(vcpu); + + __deactivate_traps(vcpu); + __deactivate_vm(vcpu); + + __sysreg_restore_state_nvhe(host_ctxt); + + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) + __fpsimd_save_fpexc32(vcpu); + + /* + * This must come after restoring the host sysregs, since a non-VHE + * system may enable SPE here and make use of the TTBRs. + */ + __debug_switch_to_host(vcpu); + + if (pmu_switch_needed) + __pmu_switch_to_host(host_ctxt); + + /* Returning to host will clear PSR.I, remask PMR if needed */ + if (system_uses_irq_prio_masking()) + gic_write_pmr(GIC_PRIO_IRQOFF); + + return exit_code; +} + +void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +{ + u64 spsr = read_sysreg_el2(SYS_SPSR); + u64 elr = read_sysreg_el2(SYS_ELR); + u64 par = read_sysreg(par_el1); + struct kvm_vcpu *vcpu = host_ctxt->__hyp_running_vcpu; + unsigned long str_va; + + if (read_sysreg(vttbr_el2)) { + __timer_disable_traps(vcpu); + __deactivate_traps(vcpu); + __deactivate_vm(vcpu); + __sysreg_restore_state_nvhe(host_ctxt); + } + + /* + * Force the panic string to be loaded from the literal pool, + * making sure it is a kernel address and not a PC-relative + * reference. + */ + asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string)); + + __hyp_do_panic(str_va, + spsr, elr, + read_sysreg(esr_el2), read_sysreg_el2(SYS_FAR), + read_sysreg(hpfar_el2), par, vcpu); + unreachable(); +} diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 7a7c08029d81..1d03c5bf0b18 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -24,76 +24,14 @@ #include #include -/* Check whether the FP regs were dirtied while in the host-side run loop: */ -static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) -{ - /* - * When the system doesn't support FP/SIMD, we cannot rely on - * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an - * abort on the very first access to FP and thus we should never - * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always - * trap the accesses. - */ - if (!system_supports_fpsimd() || - vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | - KVM_ARM64_FP_HOST); - - return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); -} - -/* Save the 32-bit only FPSIMD system register state */ -static void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) -{ - if (!vcpu_el1_is_32bit(vcpu)) - return; - - vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); -} - -static void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) -{ - /* - * We are about to set CPTR_EL2.TFP to trap all floating point - * register accesses to EL2, however, the ARM ARM clearly states that - * traps are only taken to EL2 if the operation would not otherwise - * trap to EL1. Therefore, always make sure that for 32-bit guests, - * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit. - * If FP/ASIMD is not implemented, FPEXC is UNDEFINED and any access to - * it will cause an exception. 
- */ - if (vcpu_el1_is_32bit(vcpu) && system_supports_fpsimd()) { - write_sysreg(1 << 30, fpexc32_el2); - isb(); - } -} - -static void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) -{ - /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ - write_sysreg(1 << 15, hstr_el2); - - /* - * Make sure we trap PMU access from EL0 to EL2. Also sanitize - * PMSELR_EL0 to make sure it never contains the cycle - * counter, which could make a PMXEVCNTR_EL0 access UNDEF at - * EL1 instead of being trapped to EL2. - */ - write_sysreg(0, pmselr_el0); - write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); - write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); -} - -static void __hyp_text __deactivate_traps_common(void) -{ - write_sysreg(0, hstr_el2); - write_sysreg(0, pmuserenr_el0); -} +#include "switch.h" -static void activate_traps_vhe(struct kvm_vcpu *vcpu) +static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; + ___activate_traps(vcpu); + val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; val &= ~CPACR_EL1_ZEN; @@ -121,59 +59,14 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu) write_sysreg(kvm_get_hyp_vector(), vbar_el1); } -NOKPROBE_SYMBOL(activate_traps_vhe); - -static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu) -{ - u64 val; - - __activate_traps_common(vcpu); - - val = CPTR_EL2_DEFAULT; - val |= CPTR_EL2_TTA | CPTR_EL2_TZ | CPTR_EL2_TAM; - if (!update_fp_enabled(vcpu)) { - val |= CPTR_EL2_TFP; - __activate_traps_fpsimd32(vcpu); - } - - write_sysreg(val, cptr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt; - - isb(); - /* - * At this stage, and thanks to the above isb(), S2 is - * configured and enabled. We can now restore the guest's S1 - * configuration: SCTLR, and only then TCR. - */ - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - isb(); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } -} +NOKPROBE_SYMBOL(__activate_traps); -static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct kvm_vcpu *vcpu) { - u64 hcr = vcpu->arch.hcr_el2; - - if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) - hcr |= HCR_TVM; - - write_sysreg(hcr, hcr_el2); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) - write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); + extern char vectors[]; /* kernel exception vectors */ - if (has_vhe()) - activate_traps_vhe(vcpu); - else - __activate_traps_nvhe(vcpu); -} + ___deactivate_traps(vcpu); -static void deactivate_traps_vhe(void) -{ - extern char vectors[]; /* kernel exception vectors */ write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); /* @@ -186,57 +79,7 @@ static void deactivate_traps_vhe(void) write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1); write_sysreg(vectors, vbar_el1); } -NOKPROBE_SYMBOL(deactivate_traps_vhe); - -static void __hyp_text __deactivate_traps_nvhe(void) -{ - u64 mdcr_el2 = read_sysreg(mdcr_el2); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - u64 val; - - /* - * Set the TCR and SCTLR registers in the exact opposite - * sequence as __activate_traps_nvhe (first prevent walks, - * then force the MMU on). A generous sprinkling of isb() - * ensure that things happen in this exact order. 
- */ - val = read_sysreg_el1(SYS_TCR); - write_sysreg_el1(val | TCR_EPD1_MASK | TCR_EPD0_MASK, SYS_TCR); - isb(); - val = read_sysreg_el1(SYS_SCTLR); - write_sysreg_el1(val | SCTLR_ELx_M, SYS_SCTLR); - isb(); - } - - __deactivate_traps_common(); - - mdcr_el2 &= MDCR_EL2_HPMN_MASK; - mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT; - - write_sysreg(mdcr_el2, mdcr_el2); - write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2); - write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); -} - -static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) -{ - /* - * If we pended a virtual abort, preserve it until it gets - * cleared. See D1.14.3 (Virtual Interrupts) for details, but - * the crucial bit is "On taking a vSError interrupt, - * HCR_EL2.VSE is cleared to 0." - */ - if (vcpu->arch.hcr_el2 & HCR_VSE) { - vcpu->arch.hcr_el2 &= ~HCR_VSE; - vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; - } - - if (has_vhe()) - deactivate_traps_vhe(); - else - __deactivate_traps_nvhe(); -} +NOKPROBE_SYMBOL(__deactivate_traps); void activate_traps_vhe_load(struct kvm_vcpu *vcpu) { @@ -256,385 +99,6 @@ void deactivate_traps_vhe_put(void) __deactivate_traps_common(); } -static void __hyp_text __activate_vm(struct kvm *kvm) -{ - __load_guest_stage2(kvm); -} - -static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) -{ - write_sysreg(0, vttbr_el2); -} - -/* Save VGICv3 state on non-VHE systems */ -static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) -{ - if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { - __vgic_v3_save_state(vcpu); - __vgic_v3_deactivate_traps(vcpu); - } -} - -/* Restore VGICv3 state on non_VEH systems */ -static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) -{ - if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { - __vgic_v3_activate_traps(vcpu); - __vgic_v3_restore_state(vcpu); - } -} - -static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) -{ - u64 par, tmp; - - /* - * Resolve the IPA the hard way using the guest VA. - * - * Stage-1 translation already validated the memory access - * rights. As such, we can use the EL1 translation regime, and - * don't have to distinguish between EL0 and EL1 access. - * - * We do need to save/restore PAR_EL1 though, as we haven't - * saved the guest context yet, and we may return early... - */ - par = read_sysreg(par_el1); - asm volatile("at s1e1r, %0" : : "r" (far)); - isb(); - - tmp = read_sysreg(par_el1); - write_sysreg(par, par_el1); - - if (unlikely(tmp & SYS_PAR_EL1_F)) - return false; /* Translation failed, back to guest */ - - /* Convert PAR to HPFAR format */ - *hpfar = PAR_TO_HPFAR(tmp); - return true; -} - -static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) -{ - u8 ec; - u64 esr; - u64 hpfar, far; - - esr = vcpu->arch.fault.esr_el2; - ec = ESR_ELx_EC(esr); - - if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) - return true; - - far = read_sysreg_el2(SYS_FAR); - - /* - * The HPFAR can be invalid if the stage 2 fault did not - * happen during a stage 1 page table walk (the ESR_EL2.S1PTW - * bit is clear) and one of the two following cases are true: - * 1. The fault was due to a permission fault - * 2. The processor carries errata 834220 - * - * Therefore, for all non S1PTW faults where we either have a - * permission fault or the errata workaround is enabled, we - * resolve the IPA using the AT instruction. 
- */ - if (!(esr & ESR_ELx_S1PTW) && - (cpus_have_final_cap(ARM64_WORKAROUND_834220) || - (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { - if (!__translate_far_to_hpfar(far, &hpfar)) - return false; - } else { - hpfar = read_sysreg(hpfar_el2); - } - - vcpu->arch.fault.far_el2 = far; - vcpu->arch.fault.hpfar_el2 = hpfar; - return true; -} - -/* Check for an FPSIMD/SVE trap and handle as appropriate */ -static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) -{ - bool vhe, sve_guest, sve_host; - u8 hsr_ec; - - if (!system_supports_fpsimd()) - return false; - - if (system_supports_sve()) { - sve_guest = vcpu_has_sve(vcpu); - sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; - vhe = true; - } else { - sve_guest = false; - sve_host = false; - vhe = has_vhe(); - } - - hsr_ec = kvm_vcpu_trap_get_class(vcpu); - if (hsr_ec != ESR_ELx_EC_FP_ASIMD && - hsr_ec != ESR_ELx_EC_SVE) - return false; - - /* Don't handle SVE traps for non-SVE vcpus here: */ - if (!sve_guest) - if (hsr_ec != ESR_ELx_EC_FP_ASIMD) - return false; - - /* Valid trap. Switch the context: */ - - if (vhe) { - u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; - - if (sve_guest) - reg |= CPACR_EL1_ZEN; - - write_sysreg(reg, cpacr_el1); - } else { - write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, - cptr_el2); - } - - isb(); - - if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { - /* - * In the SVE case, VHE is assumed: it is enforced by - * Kconfig and kvm_arch_init(). - */ - if (sve_host) { - struct thread_struct *thread = container_of( - vcpu->arch.host_fpsimd_state, - struct thread_struct, uw.fpsimd_state); - - sve_save_state(sve_pffr(thread), - &vcpu->arch.host_fpsimd_state->fpsr); - } else { - __fpsimd_save_state(vcpu->arch.host_fpsimd_state); - } - - vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; - } - - if (sve_guest) { - sve_load_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, - sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); - write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); - } else { - __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); - } - - /* Skip restoring fpexc32 for AArch64 guests */ - if (!(read_sysreg(hcr_el2) & HCR_RW)) - write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], - fpexc32_el2); - - vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; - - return true; -} - -static bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) -{ - u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); - int rt = kvm_vcpu_sys_get_rt(vcpu); - u64 val = vcpu_get_reg(vcpu, rt); - - /* - * The normal sysreg handling code expects to see the traps, - * let's not do anything here. 
- */ - if (vcpu->arch.hcr_el2 & HCR_TVM) - return false; - - switch (sysreg) { - case SYS_SCTLR_EL1: - write_sysreg_el1(val, SYS_SCTLR); - break; - case SYS_TTBR0_EL1: - write_sysreg_el1(val, SYS_TTBR0); - break; - case SYS_TTBR1_EL1: - write_sysreg_el1(val, SYS_TTBR1); - break; - case SYS_TCR_EL1: - write_sysreg_el1(val, SYS_TCR); - break; - case SYS_ESR_EL1: - write_sysreg_el1(val, SYS_ESR); - break; - case SYS_FAR_EL1: - write_sysreg_el1(val, SYS_FAR); - break; - case SYS_AFSR0_EL1: - write_sysreg_el1(val, SYS_AFSR0); - break; - case SYS_AFSR1_EL1: - write_sysreg_el1(val, SYS_AFSR1); - break; - case SYS_MAIR_EL1: - write_sysreg_el1(val, SYS_MAIR); - break; - case SYS_AMAIR_EL1: - write_sysreg_el1(val, SYS_AMAIR); - break; - case SYS_CONTEXTIDR_EL1: - write_sysreg_el1(val, SYS_CONTEXTIDR); - break; - default: - return false; - } - - __kvm_skip_instr(vcpu); - return true; -} - -/* - * Return true when we were able to fixup the guest exit and should return to - * the guest, false when we should restore the host state and return to the - * main run loop. - */ -static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) -{ - if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) - vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); - - /* - * We're using the raw exception code in order to only process - * the trap if no SError is pending. We will come back to the - * same PC once the SError has been injected, and replay the - * trapping instruction. - */ - if (*exit_code != ARM_EXCEPTION_TRAP) - goto exit; - - if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && - handle_tx2_tvm(vcpu)) - return true; - - /* - * We trap the first access to the FP/SIMD to save the host context - * and restore the guest context lazily. - * If FP/SIMD is not implemented, handle the trap and inject an - * undefined instruction exception to the guest. - * Similarly for trapped SVE accesses. - */ - if (__hyp_handle_fpsimd(vcpu)) - return true; - - if (!__populate_fault_info(vcpu)) - return true; - - if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { - bool valid; - - valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && - kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && - kvm_vcpu_dabt_isvalid(vcpu) && - !kvm_vcpu_dabt_isextabt(vcpu) && - !kvm_vcpu_dabt_iss1tw(vcpu); - - if (valid) { - int ret = __vgic_v2_perform_cpuif_access(vcpu); - - if (ret == 1) - return true; - - /* Promote an illegal access to an SError.*/ - if (ret == -1) - *exit_code = ARM_EXCEPTION_EL1_SERROR; - - goto exit; - } - } - - if (static_branch_unlikely(&vgic_v3_cpuif_trap) && - (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || - kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { - int ret = __vgic_v3_perform_cpuif_access(vcpu); - - if (ret == 1) - return true; - } - -exit: - /* Return to the host kernel and handle the exit */ - return false; -} - -static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) -{ - if (!cpus_have_final_cap(ARM64_SSBD)) - return false; - - return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); -} - -static void __hyp_text __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) -{ -#ifdef CONFIG_ARM64_SSBD - /* - * The host runs with the workaround always present. If the - * guest wants it disabled, so be it... 
- */ - if (__needs_ssbd_off(vcpu) && - __hyp_this_cpu_read(arm64_ssbd_callback_required)) - arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); -#endif -} - -static void __hyp_text __set_host_arch_workaround_state(struct kvm_vcpu *vcpu) -{ -#ifdef CONFIG_ARM64_SSBD - /* - * If the guest has disabled the workaround, bring it back on. - */ - if (__needs_ssbd_off(vcpu) && - __hyp_this_cpu_read(arm64_ssbd_callback_required)) - arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL); -#endif -} - -/** - * Disable host events, enable guest events - */ -static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) -{ - struct kvm_host_data *host; - struct kvm_pmu_events *pmu; - - host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); - pmu = &host->pmu_events; - - if (pmu->events_host) - write_sysreg(pmu->events_host, pmcntenclr_el0); - - if (pmu->events_guest) - write_sysreg(pmu->events_guest, pmcntenset_el0); - - return (pmu->events_host || pmu->events_guest); -} - -/** - * Disable guest events, enable host events - */ -static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) -{ - struct kvm_host_data *host; - struct kvm_pmu_events *pmu; - - host = container_of(host_ctxt, struct kvm_host_data, host_ctxt); - pmu = &host->pmu_events; - - if (pmu->events_guest) - write_sysreg(pmu->events_guest, pmcntenclr_el0); - - if (pmu->events_host) - write_sysreg(pmu->events_host, pmcntenset_el0); -} - /* Switch to the guest for VHE systems running in EL2 */ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { @@ -691,7 +155,7 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) } NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe); -int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { int ret; @@ -726,126 +190,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) return ret; } -/* Switch to the guest for legacy non-VHE systems */ -int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) -{ - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - bool pmu_switch_needed; - u64 exit_code; - - /* - * Having IRQs masked via PMR when entering the guest means the GIC - * will not signal the CPU of interrupts of lower priority, and the - * only way to get out will be via guest exceptions. - * Naturally, we want to avoid this. - */ - if (system_uses_irq_prio_masking()) { - gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET); - pmr_sync(); - } - - vcpu = kern_hyp_va(vcpu); - - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - host_ctxt->__hyp_running_vcpu = vcpu; - guest_ctxt = &vcpu->arch.ctxt; - - pmu_switch_needed = __pmu_switch_to_guest(host_ctxt); - - __sysreg_save_state_nvhe(host_ctxt); - - /* - * We must restore the 32-bit state before the sysregs, thanks - * to erratum #852523 (Cortex-A57) or #853709 (Cortex-A72). - * - * Also, and in order to be able to deal with erratum #1319537 (A57) - * and #1319367 (A72), we must ensure that all VM-related sysreg are - * restored before we enable S2 translation. - */ - __sysreg32_restore_state(vcpu); - __sysreg_restore_state_nvhe(guest_ctxt); - - __activate_vm(kern_hyp_va(vcpu->kvm)); - __activate_traps(vcpu); - - __hyp_vgic_restore_state(vcpu); - __timer_enable_traps(vcpu); - - __debug_switch_to_guest(vcpu); - - __set_guest_arch_workaround_state(vcpu); - - do { - /* Jump in the fire! */ - exit_code = __guest_enter(vcpu, host_ctxt); - - /* And we're baaack! 
*/ - } while (fixup_guest_exit(vcpu, &exit_code)); - - __set_host_arch_workaround_state(vcpu); - - __sysreg_save_state_nvhe(guest_ctxt); - __sysreg32_save_state(vcpu); - __timer_disable_traps(vcpu); - __hyp_vgic_save_state(vcpu); - - __deactivate_traps(vcpu); - __deactivate_vm(vcpu); - - __sysreg_restore_state_nvhe(host_ctxt); - - if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) - __fpsimd_save_fpexc32(vcpu); - - /* - * This must come after restoring the host sysregs, since a non-VHE - * system may enable SPE here and make use of the TTBRs. - */ - __debug_switch_to_host(vcpu); - - if (pmu_switch_needed) - __pmu_switch_to_host(host_ctxt); - - /* Returning to host will clear PSR.I, remask PMR if needed */ - if (system_uses_irq_prio_masking()) - gic_write_pmr(GIC_PRIO_IRQOFF); - - return exit_code; -} - -static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; - -static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par, - struct kvm_cpu_context *__host_ctxt) -{ - struct kvm_vcpu *vcpu; - unsigned long str_va; - - vcpu = __host_ctxt->__hyp_running_vcpu; - - if (read_sysreg(vttbr_el2)) { - __timer_disable_traps(vcpu); - __deactivate_traps(vcpu); - __deactivate_vm(vcpu); - __sysreg_restore_state_nvhe(__host_ctxt); - } - - /* - * Force the panic string to be loaded from the literal pool, - * making sure it is a kernel address and not a PC-relative - * reference. - */ - asm volatile("ldr %0, =%1" : "=r" (str_va) : "S" (__hyp_panic_string)); - - __hyp_do_panic(str_va, - spsr, elr, - read_sysreg(esr_el2), read_sysreg_el2(SYS_FAR), - read_sysreg(hpfar_el2), par, vcpu); -} - -static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par, - struct kvm_cpu_context *host_ctxt) +static void __hyp_call_panic(u64 spsr, u64 elr, u64 par, + struct kvm_cpu_context *host_ctxt) { struct kvm_vcpu *vcpu; vcpu = host_ctxt->__hyp_running_vcpu; @@ -858,18 +204,14 @@ static void __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par, read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR), read_sysreg(hpfar_el2), par, vcpu); } -NOKPROBE_SYMBOL(__hyp_call_panic_vhe); +NOKPROBE_SYMBOL(__hyp_call_panic); -void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) { u64 spsr = read_sysreg_el2(SYS_SPSR); u64 elr = read_sysreg_el2(SYS_ELR); u64 par = read_sysreg(par_el1); - if (!has_vhe()) - __hyp_call_panic_nvhe(spsr, elr, par, host_ctxt); - else - __hyp_call_panic_vhe(spsr, elr, par, host_ctxt); - + __hyp_call_panic(spsr, elr, par, host_ctxt); unreachable(); } diff --git a/arch/arm64/kvm/hyp/switch.h b/arch/arm64/kvm/hyp/switch.h new file mode 100644 index 000000000000..0ce8185e26db --- /dev/null +++ b/arch/arm64/kvm/hyp/switch.h @@ -0,0 +1,446 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_SWITCH_H__ +#define __ARM64_KVM_HYP_SWITCH_H__ + +#include +#include +#include +#include +#include + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; + +/* Check whether the FP regs were dirtied while in the host-side run loop: */ +static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) +{ + /* + * When the system doesn't support FP/SIMD, we cannot rely on + * the 
_TIF_FOREIGN_FPSTATE flag. However, we always inject an + * abort on the very first access to FP and thus we should never + * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always + * trap the accesses. + */ + if (!system_supports_fpsimd() || + vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_FP_HOST); + + return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); +} + +/* Save the 32-bit only FPSIMD system register state */ +static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) +{ + if (!vcpu_el1_is_32bit(vcpu)) + return; + + vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); +} + +static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) +{ + /* + * We are about to set CPTR_EL2.TFP to trap all floating point + * register accesses to EL2, however, the ARM ARM clearly states that + * traps are only taken to EL2 if the operation would not otherwise + * trap to EL1. Therefore, always make sure that for 32-bit guests, + * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit. + * If FP/ASIMD is not implemented, FPEXC is UNDEFINED and any access to + * it will cause an exception. + */ + if (vcpu_el1_is_32bit(vcpu) && system_supports_fpsimd()) { + write_sysreg(1 << 30, fpexc32_el2); + isb(); + } +} + +static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) +{ + /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ + write_sysreg(1 << 15, hstr_el2); + + /* + * Make sure we trap PMU access from EL0 to EL2. Also sanitize + * PMSELR_EL0 to make sure it never contains the cycle + * counter, which could make a PMXEVCNTR_EL0 access UNDEF at + * EL1 instead of being trapped to EL2. + */ + write_sysreg(0, pmselr_el0); + write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); + write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); +} + +static inline void __hyp_text __deactivate_traps_common(void) +{ + write_sysreg(0, hstr_el2); + write_sysreg(0, pmuserenr_el0); +} + +static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) +{ + u64 hcr = vcpu->arch.hcr_el2; + + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM)) + hcr |= HCR_TVM; + + write_sysreg(hcr, hcr_el2); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && (hcr & HCR_VSE)) + write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); +} + +static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) +{ + /* + * If we pended a virtual abort, preserve it until it gets + * cleared. See D1.14.3 (Virtual Interrupts) for details, but + * the crucial bit is "On taking a vSError interrupt, + * HCR_EL2.VSE is cleared to 0." + */ + if (vcpu->arch.hcr_el2 & HCR_VSE) { + vcpu->arch.hcr_el2 &= ~HCR_VSE; + vcpu->arch.hcr_el2 |= read_sysreg(hcr_el2) & HCR_VSE; + } +} + +static inline void __hyp_text __activate_vm(struct kvm *kvm) +{ + __load_guest_stage2(kvm); +} + +static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) +{ + u64 par, tmp; + + /* + * Resolve the IPA the hard way using the guest VA. + * + * Stage-1 translation already validated the memory access + * rights. As such, we can use the EL1 translation regime, and + * don't have to distinguish between EL0 and EL1 access. + * + * We do need to save/restore PAR_EL1 though, as we haven't + * saved the guest context yet, and we may return early... 
+ */ + par = read_sysreg(par_el1); + asm volatile("at s1e1r, %0" : : "r" (far)); + isb(); + + tmp = read_sysreg(par_el1); + write_sysreg(par, par_el1); + + if (unlikely(tmp & SYS_PAR_EL1_F)) + return false; /* Translation failed, back to guest */ + + /* Convert PAR to HPFAR format */ + *hpfar = PAR_TO_HPFAR(tmp); + return true; +} + +static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) +{ + u8 ec; + u64 esr; + u64 hpfar, far; + + esr = vcpu->arch.fault.esr_el2; + ec = ESR_ELx_EC(esr); + + if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW) + return true; + + far = read_sysreg_el2(SYS_FAR); + + /* + * The HPFAR can be invalid if the stage 2 fault did not + * happen during a stage 1 page table walk (the ESR_EL2.S1PTW + * bit is clear) and one of the two following cases are true: + * 1. The fault was due to a permission fault + * 2. The processor carries errata 834220 + * + * Therefore, for all non S1PTW faults where we either have a + * permission fault or the errata workaround is enabled, we + * resolve the IPA using the AT instruction. + */ + if (!(esr & ESR_ELx_S1PTW) && + (cpus_have_final_cap(ARM64_WORKAROUND_834220) || + (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) { + if (!__translate_far_to_hpfar(far, &hpfar)) + return false; + } else { + hpfar = read_sysreg(hpfar_el2); + } + + vcpu->arch.fault.far_el2 = far; + vcpu->arch.fault.hpfar_el2 = hpfar; + return true; +} + +/* Check for an FPSIMD/SVE trap and handle as appropriate */ +static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) +{ + bool vhe, sve_guest, sve_host; + u8 hsr_ec; + + if (!system_supports_fpsimd()) + return false; + + if (system_supports_sve()) { + sve_guest = vcpu_has_sve(vcpu); + sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE; + vhe = true; + } else { + sve_guest = false; + sve_host = false; + vhe = has_vhe(); + } + + hsr_ec = kvm_vcpu_trap_get_class(vcpu); + if (hsr_ec != ESR_ELx_EC_FP_ASIMD && + hsr_ec != ESR_ELx_EC_SVE) + return false; + + /* Don't handle SVE traps for non-SVE vcpus here: */ + if (!sve_guest) + if (hsr_ec != ESR_ELx_EC_FP_ASIMD) + return false; + + /* Valid trap. Switch the context: */ + + if (vhe) { + u64 reg = read_sysreg(cpacr_el1) | CPACR_EL1_FPEN; + + if (sve_guest) + reg |= CPACR_EL1_ZEN; + + write_sysreg(reg, cpacr_el1); + } else { + write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP, + cptr_el2); + } + + isb(); + + if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { + /* + * In the SVE case, VHE is assumed: it is enforced by + * Kconfig and kvm_arch_init(). 
+ */ + if (sve_host) { + struct thread_struct *thread = container_of( + vcpu->arch.host_fpsimd_state, + struct thread_struct, uw.fpsimd_state); + + sve_save_state(sve_pffr(thread), + &vcpu->arch.host_fpsimd_state->fpsr); + } else { + __fpsimd_save_state(vcpu->arch.host_fpsimd_state); + } + + vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; + } + + if (sve_guest) { + sve_load_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.gp_regs.fp_regs.fpsr, + sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1); + write_sysreg_s(vcpu->arch.ctxt.sys_regs[ZCR_EL1], SYS_ZCR_EL12); + } else { + __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); + } + + /* Skip restoring fpexc32 for AArch64 guests */ + if (!(read_sysreg(hcr_el2) & HCR_RW)) + write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], + fpexc32_el2); + + vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; + + return true; +} + +static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) +{ + u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); + int rt = kvm_vcpu_sys_get_rt(vcpu); + u64 val = vcpu_get_reg(vcpu, rt); + + /* + * The normal sysreg handling code expects to see the traps, + * let's not do anything here. + */ + if (vcpu->arch.hcr_el2 & HCR_TVM) + return false; + + switch (sysreg) { + case SYS_SCTLR_EL1: + write_sysreg_el1(val, SYS_SCTLR); + break; + case SYS_TTBR0_EL1: + write_sysreg_el1(val, SYS_TTBR0); + break; + case SYS_TTBR1_EL1: + write_sysreg_el1(val, SYS_TTBR1); + break; + case SYS_TCR_EL1: + write_sysreg_el1(val, SYS_TCR); + break; + case SYS_ESR_EL1: + write_sysreg_el1(val, SYS_ESR); + break; + case SYS_FAR_EL1: + write_sysreg_el1(val, SYS_FAR); + break; + case SYS_AFSR0_EL1: + write_sysreg_el1(val, SYS_AFSR0); + break; + case SYS_AFSR1_EL1: + write_sysreg_el1(val, SYS_AFSR1); + break; + case SYS_MAIR_EL1: + write_sysreg_el1(val, SYS_MAIR); + break; + case SYS_AMAIR_EL1: + write_sysreg_el1(val, SYS_AMAIR); + break; + case SYS_CONTEXTIDR_EL1: + write_sysreg_el1(val, SYS_CONTEXTIDR); + break; + default: + return false; + } + + __kvm_skip_instr(vcpu); + return true; +} + +/* + * Return true when we were able to fixup the guest exit and should return to + * the guest, false when we should restore the host state and return to the + * main run loop. + */ +static inline bool __hyp_text +fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +{ + if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) + vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); + + /* + * We're using the raw exception code in order to only process + * the trap if no SError is pending. We will come back to the + * same PC once the SError has been injected, and replay the + * trapping instruction. + */ + if (*exit_code != ARM_EXCEPTION_TRAP) + goto exit; + + if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) && + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 && + handle_tx2_tvm(vcpu)) + return true; + + /* + * We trap the first access to the FP/SIMD to save the host context + * and restore the guest context lazily. + * If FP/SIMD is not implemented, handle the trap and inject an + * undefined instruction exception to the guest. + * Similarly for trapped SVE accesses. 
+ */ + if (__hyp_handle_fpsimd(vcpu)) + return true; + + if (!__populate_fault_info(vcpu)) + return true; + + if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { + bool valid; + + valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && + kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT && + kvm_vcpu_dabt_isvalid(vcpu) && + !kvm_vcpu_dabt_isextabt(vcpu) && + !kvm_vcpu_dabt_iss1tw(vcpu); + + if (valid) { + int ret = __vgic_v2_perform_cpuif_access(vcpu); + + if (ret == 1) + return true; + + /* Promote an illegal access to an SError.*/ + if (ret == -1) + *exit_code = ARM_EXCEPTION_EL1_SERROR; + + goto exit; + } + } + + if (static_branch_unlikely(&vgic_v3_cpuif_trap) && + (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { + int ret = __vgic_v3_perform_cpuif_access(vcpu); + + if (ret == 1) + return true; + } + +exit: + /* Return to the host kernel and handle the exit */ + return false; +} + +static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) +{ + if (!cpus_have_final_cap(ARM64_SSBD)) + return false; + + return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); +} + +static inline void __hyp_text +__set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) +{ +#ifdef CONFIG_ARM64_SSBD + /* + * The host runs with the workaround always present. If the + * guest wants it disabled, so be it... + */ + if (__needs_ssbd_off(vcpu) && + __hyp_this_cpu_read(arm64_ssbd_callback_required)) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 0, NULL); +#endif +} + +static inline void __hyp_text +__set_host_arch_workaround_state(struct kvm_vcpu *vcpu) +{ +#ifdef CONFIG_ARM64_SSBD + /* + * If the guest has disabled the workaround, bring it back on. + */ + if (__needs_ssbd_off(vcpu) && + __hyp_this_cpu_read(arm64_ssbd_callback_required)) + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_2, 1, NULL); +#endif +} + +#endif /* __ARM64_KVM_HYP_SWITCH_H__ */ diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 75b1925763f1..7a261ace2405 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -125,7 +125,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) /* * Must only be done for guest registers, hence the context * test. We're coming from the host, so SCTLR.M is already - * set. Pairs with __activate_traps_nvhe(). + * set. Pairs with nVHE's __activate_traps(). */ write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | TCR_EPD1_MASK | TCR_EPD0_MASK), @@ -153,7 +153,7 @@ static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) ctxt->__hyp_running_vcpu) { /* * Must only be done for host registers, hence the context - * test. Pairs with __deactivate_traps_nvhe(). + * test. Pairs with nVHE's __deactivate_traps(). 
 */
 isb();
 /*
From patchwork Fri May 15 10:58:35 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
    Suzuki K Poulose, Will Deacon
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/14] arm64: kvm: Split hyp/debug-sr.c to VHE/nVHE
Date: Fri, 15 May 2020 11:58:35 +0100
Message-Id: <20200515105841.73532-9-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code
separately from VHE and the rest of the kernel.

debug-sr.c contains KVM's code for context-switching debug registers,
with some parts shared between VHE/nVHE. These common routines are
moved to debug-sr.h, VHE-specific code is left in debug-sr.c and
nVHE-specific code is moved to nvhe/debug-sr.c.

Functions are slightly refactored to move code hidden behind `has_vhe()`
checks to the corresponding .c files.
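For orientation, the shape of the split is sketched below. This is a
condensed illustration, not the literal patch contents: the function
bodies are elided, and the two wrappers live in separate compilation
units (debug-sr.c for VHE, nvhe/debug-sr.c for nVHE) rather than side
by side.

	/* debug-sr.h: shared logic, static inline so that it is
	 * compiled into both the VHE and the nVHE object files */
	static inline void __hyp_text
	__debug_switch_to_guest_common(struct kvm_vcpu *vcpu)
	{
		/* save host debug state, restore guest debug state */
	}

	/* debug-sr.c (VHE): no SPE drain needed before guest entry */
	void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
	{
		__debug_switch_to_guest_common(vcpu);
	}

	/* nvhe/debug-sr.c (nVHE): disable and flush SPE first */
	void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
	{
		__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
		__debug_switch_to_guest_common(vcpu);
	}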
Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 3 - arch/arm64/kvm/hyp/debug-sr.c | 214 +---------------------------- arch/arm64/kvm/hyp/debug-sr.h | 172 +++++++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/Makefile | 2 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 77 +++++++++++ 5 files changed, 256 insertions(+), 212 deletions(-) create mode 100644 arch/arm64/kvm/hyp/debug-sr.h create mode 100644 arch/arm64/kvm/hyp/nvhe/debug-sr.c diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index f8d94190af80..5de3a5998bcd 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,15 +61,12 @@ __efistub__ctype = _ctype; * memory mappings. */ -__kvm_nvhe___debug_switch_to_guest = __debug_switch_to_guest; -__kvm_nvhe___debug_switch_to_host = __debug_switch_to_host; __kvm_nvhe___fpsimd_restore_state = __fpsimd_restore_state; __kvm_nvhe___fpsimd_save_state = __fpsimd_save_state; __kvm_nvhe___guest_enter = __guest_enter; __kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; -__kvm_nvhe___kvm_get_mdcr_el2 = __kvm_get_mdcr_el2; __kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; __kvm_nvhe___sysreg32_restore_state = __sysreg32_restore_state; diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c index 0fc9872a1467..39e0b9bcc8b7 100644 --- a/arch/arm64/kvm/hyp/debug-sr.c +++ b/arch/arm64/kvm/hyp/debug-sr.c @@ -4,221 +4,19 @@ * Author: Marc Zyngier */ -#include -#include +#include "debug-sr.h" -#include -#include -#include -#include - -#define read_debug(r,n) read_sysreg(r##n##_el1) -#define write_debug(v,r,n) write_sysreg(v, r##n##_el1) - -#define save_debug(ptr,reg,nr) \ - switch (nr) { \ - case 15: ptr[15] = read_debug(reg, 15); \ - /* Fall through */ \ - case 14: ptr[14] = read_debug(reg, 14); \ - /* Fall through */ \ - case 13: ptr[13] = read_debug(reg, 13); \ - /* Fall through */ \ - case 12: ptr[12] = read_debug(reg, 12); \ - /* Fall through */ \ - case 11: ptr[11] = read_debug(reg, 11); \ - /* Fall through */ \ - case 10: ptr[10] = read_debug(reg, 10); \ - /* Fall through */ \ - case 9: ptr[9] = read_debug(reg, 9); \ - /* Fall through */ \ - case 8: ptr[8] = read_debug(reg, 8); \ - /* Fall through */ \ - case 7: ptr[7] = read_debug(reg, 7); \ - /* Fall through */ \ - case 6: ptr[6] = read_debug(reg, 6); \ - /* Fall through */ \ - case 5: ptr[5] = read_debug(reg, 5); \ - /* Fall through */ \ - case 4: ptr[4] = read_debug(reg, 4); \ - /* Fall through */ \ - case 3: ptr[3] = read_debug(reg, 3); \ - /* Fall through */ \ - case 2: ptr[2] = read_debug(reg, 2); \ - /* Fall through */ \ - case 1: ptr[1] = read_debug(reg, 1); \ - /* Fall through */ \ - default: ptr[0] = read_debug(reg, 0); \ - } - -#define restore_debug(ptr,reg,nr) \ - switch (nr) { \ - case 15: write_debug(ptr[15], reg, 15); \ - /* Fall through */ \ - case 14: write_debug(ptr[14], reg, 14); \ - /* Fall through */ \ - case 13: write_debug(ptr[13], reg, 13); \ - /* Fall through */ \ - case 12: write_debug(ptr[12], reg, 12); \ - /* Fall through */ \ - case 11: write_debug(ptr[11], reg, 11); \ - /* Fall through */ \ - case 10: write_debug(ptr[10], reg, 10); \ - /* Fall through */ \ - case 9: write_debug(ptr[9], reg, 9); \ - /* Fall through */ \ - case 8: write_debug(ptr[8], reg, 8); \ - /* Fall through */ \ - case 7: write_debug(ptr[7], reg, 7); \ - /* Fall through */ \ - case 6: write_debug(ptr[6], reg, 6); \ - /* 
Fall through */ \ - case 5: write_debug(ptr[5], reg, 5); \ - /* Fall through */ \ - case 4: write_debug(ptr[4], reg, 4); \ - /* Fall through */ \ - case 3: write_debug(ptr[3], reg, 3); \ - /* Fall through */ \ - case 2: write_debug(ptr[2], reg, 2); \ - /* Fall through */ \ - case 1: write_debug(ptr[1], reg, 1); \ - /* Fall through */ \ - default: write_debug(ptr[0], reg, 0); \ - } - -static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1) -{ - u64 reg; - - /* Clear pmscr in case of early return */ - *pmscr_el1 = 0; - - /* SPE present on this CPU? */ - if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1), - ID_AA64DFR0_PMSVER_SHIFT)) - return; - - /* Yes; is it owned by EL3? */ - reg = read_sysreg_s(SYS_PMBIDR_EL1); - if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT)) - return; - - /* No; is the host actually using the thing? */ - reg = read_sysreg_s(SYS_PMBLIMITR_EL1); - if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT))) - return; - - /* Yes; save the control register and disable data generation */ - *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1); - write_sysreg_s(0, SYS_PMSCR_EL1); - isb(); - - /* Now drain all buffered data to memory */ - psb_csync(); - dsb(nsh); -} - -static void __hyp_text __debug_restore_spe_nvhe(u64 pmscr_el1) -{ - if (!pmscr_el1) - return; - - /* The host page table is installed, but not yet synchronised */ - isb(); - - /* Re-enable data generation */ - write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); -} - -static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu, - struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) -{ - u64 aa64dfr0; - int brps, wrps; - - aa64dfr0 = read_sysreg(id_aa64dfr0_el1); - brps = (aa64dfr0 >> 12) & 0xf; - wrps = (aa64dfr0 >> 20) & 0xf; - - save_debug(dbg->dbg_bcr, dbgbcr, brps); - save_debug(dbg->dbg_bvr, dbgbvr, brps); - save_debug(dbg->dbg_wcr, dbgwcr, wrps); - save_debug(dbg->dbg_wvr, dbgwvr, wrps); - - ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); -} - -static void __hyp_text __debug_restore_state(struct kvm_vcpu *vcpu, - struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) -{ - u64 aa64dfr0; - int brps, wrps; - - aa64dfr0 = read_sysreg(id_aa64dfr0_el1); - - brps = (aa64dfr0 >> 12) & 0xf; - wrps = (aa64dfr0 >> 20) & 0xf; - - restore_debug(dbg->dbg_bcr, dbgbcr, brps); - restore_debug(dbg->dbg_bvr, dbgbvr, brps); - restore_debug(dbg->dbg_wcr, dbgwcr, wrps); - restore_debug(dbg->dbg_wvr, dbgwvr, wrps); - - write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); -} - -void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +void __debug_switch_to_guest(struct kvm_vcpu *vcpu) { - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - struct kvm_guest_debug_arch *host_dbg; - struct kvm_guest_debug_arch *guest_dbg; - - /* - * Non-VHE: Disable and flush SPE data generation - * VHE: The vcpu can run, but it can't hide. 
- */ - if (!has_vhe()) - __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1); - - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - return; - - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - guest_ctxt = &vcpu->arch.ctxt; - host_dbg = &vcpu->arch.host_debug_state.regs; - guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); - - __debug_save_state(vcpu, host_dbg, host_ctxt); - __debug_restore_state(vcpu, guest_dbg, guest_ctxt); + __debug_switch_to_guest_common(vcpu); } -void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) +void __debug_switch_to_host(struct kvm_vcpu *vcpu) { - struct kvm_cpu_context *host_ctxt; - struct kvm_cpu_context *guest_ctxt; - struct kvm_guest_debug_arch *host_dbg; - struct kvm_guest_debug_arch *guest_dbg; - - if (!has_vhe()) - __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1); - - if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) - return; - - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - guest_ctxt = &vcpu->arch.ctxt; - host_dbg = &vcpu->arch.host_debug_state.regs; - guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); - - __debug_save_state(vcpu, guest_dbg, guest_ctxt); - __debug_restore_state(vcpu, host_dbg, host_ctxt); - - vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; + __debug_switch_to_host_common(vcpu); } -u32 __hyp_text __kvm_get_mdcr_el2(void) +u32 __kvm_get_mdcr_el2(void) { return read_sysreg(mdcr_el2); } diff --git a/arch/arm64/kvm/hyp/debug-sr.h b/arch/arm64/kvm/hyp/debug-sr.h new file mode 100644 index 000000000000..6a94553493a1 --- /dev/null +++ b/arch/arm64/kvm/hyp/debug-sr.h @@ -0,0 +1,172 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_DEBUG_SR_H__ +#define __ARM64_KVM_HYP_DEBUG_SR_H__ + +#include +#include + +#include +#include +#include +#include + +#define read_debug(r,n) read_sysreg(r##n##_el1) +#define write_debug(v,r,n) write_sysreg(v, r##n##_el1) + +#define save_debug(ptr,reg,nr) \ + switch (nr) { \ + case 15: ptr[15] = read_debug(reg, 15); \ + /* Fall through */ \ + case 14: ptr[14] = read_debug(reg, 14); \ + /* Fall through */ \ + case 13: ptr[13] = read_debug(reg, 13); \ + /* Fall through */ \ + case 12: ptr[12] = read_debug(reg, 12); \ + /* Fall through */ \ + case 11: ptr[11] = read_debug(reg, 11); \ + /* Fall through */ \ + case 10: ptr[10] = read_debug(reg, 10); \ + /* Fall through */ \ + case 9: ptr[9] = read_debug(reg, 9); \ + /* Fall through */ \ + case 8: ptr[8] = read_debug(reg, 8); \ + /* Fall through */ \ + case 7: ptr[7] = read_debug(reg, 7); \ + /* Fall through */ \ + case 6: ptr[6] = read_debug(reg, 6); \ + /* Fall through */ \ + case 5: ptr[5] = read_debug(reg, 5); \ + /* Fall through */ \ + case 4: ptr[4] = read_debug(reg, 4); \ + /* Fall through */ \ + case 3: ptr[3] = read_debug(reg, 3); \ + /* Fall through */ \ + case 2: ptr[2] = read_debug(reg, 2); \ + /* Fall through */ \ + case 1: ptr[1] = read_debug(reg, 1); \ + /* Fall through */ \ + default: ptr[0] = read_debug(reg, 0); \ + } + +#define restore_debug(ptr,reg,nr) \ + switch (nr) { \ + case 15: write_debug(ptr[15], reg, 15); \ + /* Fall through */ \ + case 14: write_debug(ptr[14], reg, 14); \ + /* Fall through */ \ + case 13: write_debug(ptr[13], reg, 13); \ + /* Fall through */ \ + case 12: write_debug(ptr[12], reg, 12); \ + /* Fall through */ \ + case 11: write_debug(ptr[11], reg, 11); \ + /* Fall through */ \ + case 10: write_debug(ptr[10], reg, 10); \ + /* Fall through */ \ + case 9: write_debug(ptr[9], reg, 9); \ + /* Fall through */ \ 
+ case 8: write_debug(ptr[8], reg, 8); \ + /* Fall through */ \ + case 7: write_debug(ptr[7], reg, 7); \ + /* Fall through */ \ + case 6: write_debug(ptr[6], reg, 6); \ + /* Fall through */ \ + case 5: write_debug(ptr[5], reg, 5); \ + /* Fall through */ \ + case 4: write_debug(ptr[4], reg, 4); \ + /* Fall through */ \ + case 3: write_debug(ptr[3], reg, 3); \ + /* Fall through */ \ + case 2: write_debug(ptr[2], reg, 2); \ + /* Fall through */ \ + case 1: write_debug(ptr[1], reg, 1); \ + /* Fall through */ \ + default: write_debug(ptr[0], reg, 0); \ + } + +static inline void __hyp_text +__debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) +{ + u64 aa64dfr0; + int brps, wrps; + + aa64dfr0 = read_sysreg(id_aa64dfr0_el1); + brps = (aa64dfr0 >> 12) & 0xf; + wrps = (aa64dfr0 >> 20) & 0xf; + + save_debug(dbg->dbg_bcr, dbgbcr, brps); + save_debug(dbg->dbg_bvr, dbgbvr, brps); + save_debug(dbg->dbg_wcr, dbgwcr, wrps); + save_debug(dbg->dbg_wvr, dbgwvr, wrps); + + ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); +} + +static inline void __hyp_text +__debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) +{ + u64 aa64dfr0; + int brps, wrps; + + aa64dfr0 = read_sysreg(id_aa64dfr0_el1); + + brps = (aa64dfr0 >> 12) & 0xf; + wrps = (aa64dfr0 >> 20) & 0xf; + + restore_debug(dbg->dbg_bcr, dbgbcr, brps); + restore_debug(dbg->dbg_bvr, dbgbvr, brps); + restore_debug(dbg->dbg_wcr, dbgwcr, wrps); + restore_debug(dbg->dbg_wvr, dbgwvr, wrps); + + write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); +} + +static inline void __hyp_text +__debug_switch_to_guest_common(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + struct kvm_guest_debug_arch *host_dbg; + struct kvm_guest_debug_arch *guest_dbg; + + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + return; + + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); + guest_ctxt = &vcpu->arch.ctxt; + host_dbg = &vcpu->arch.host_debug_state.regs; + guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); + + __debug_save_state(vcpu, host_dbg, host_ctxt); + __debug_restore_state(vcpu, guest_dbg, guest_ctxt); +} + +static inline void __hyp_text +__debug_switch_to_host_common(struct kvm_vcpu *vcpu) +{ + struct kvm_cpu_context *host_ctxt; + struct kvm_cpu_context *guest_ctxt; + struct kvm_guest_debug_arch *host_dbg; + struct kvm_guest_debug_arch *guest_dbg; + + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) + return; + + host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); + guest_ctxt = &vcpu->arch.ctxt; + host_dbg = &vcpu->arch.host_debug_state.regs; + guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr); + + __debug_save_state(vcpu, guest_dbg, guest_ctxt); + __debug_restore_state(vcpu, host_dbg, host_ctxt); + + vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; +} + +#endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index bbfd9d27d742..33a80da34154 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := switch.o tlb.o ../hyp-entry.o +obj-y := debug-sr.o switch.o tlb.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c 
b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
new file mode 100644
index 000000000000..b3752cfdcf3d
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2015 - ARM Ltd
+ * Author: Marc Zyngier
+ */
+
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "../debug-sr.h"
+
+static void __hyp_text __debug_save_spe(u64 *pmscr_el1)
+{
+	u64 reg;
+
+	/* Clear pmscr in case of early return */
+	*pmscr_el1 = 0;
+
+	/* SPE present on this CPU? */
+	if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
+						  ID_AA64DFR0_PMSVER_SHIFT))
+		return;
+
+	/* Yes; is it owned by EL3? */
+	reg = read_sysreg_s(SYS_PMBIDR_EL1);
+	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
+		return;
+
+	/* No; is the host actually using the thing? */
+	reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
+	if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
+		return;
+
+	/* Yes; save the control register and disable data generation */
+	*pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
+	write_sysreg_s(0, SYS_PMSCR_EL1);
+	isb();
+
+	/* Now drain all buffered data to memory */
+	psb_csync();
+	dsb(nsh);
+}
+
+static void __hyp_text __debug_restore_spe(u64 pmscr_el1)
+{
+	if (!pmscr_el1)
+		return;
+
+	/* The host page table is installed, but not yet synchronised */
+	isb();
+
+	/* Re-enable data generation */
+	write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
+}
+
+void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+{
+	/* Disable and flush SPE data generation */
+	__debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1);
+	__debug_switch_to_guest_common(vcpu);
+}
+
+void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
+{
+	__debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1);
+	__debug_switch_to_host_common(vcpu);
+}
+
+u32 __hyp_text __kvm_get_mdcr_el2(void)
+{
+	return read_sysreg(mdcr_el2);
+}
From patchwork Fri May 15 10:58:36 2020
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier,
    Suzuki K Poulose, Will Deacon
Subject: [PATCH v2 09/14] arm64: kvm: Split hyp/sysreg-sr.c to VHE/nVHE
Date: Fri, 15 May 2020 11:58:36 +0100
Message-Id: <20200515105841.73532-10-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org

This patch is part of a series which builds KVM's non-VHE hyp code
separately from VHE and the rest of the kernel.

sysreg-sr.c contains KVM's code for saving/restoring system registers,
with some parts shared between VHE/nVHE. These common routines are
moved to sysreg-sr.h, VHE-specific code is left in sysreg-sr.c and
nVHE-specific code is moved to nvhe/sysreg-sr.c.

Signed-off-by: David Brazdil
---
 arch/arm64/include/asm/kvm_asm.h    |   2 +
 arch/arm64/include/asm/kvm_host.h   |   2 -
 arch/arm64/include/asm/kvm_hyp.h    |   4 +
 arch/arm64/kernel/image-vars.h      |   5 -
 arch/arm64/kvm/hyp/nvhe/Makefile    |   2 +-
 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c |  56 +++++++
 arch/arm64/kvm/hyp/sysreg-sr.c      | 233 ++--------------------------
 arch/arm64/kvm/hyp/sysreg-sr.h      | 223 ++++++++++++++++++++++++++
 8 files changed, 299 insertions(+), 228 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/sysreg-sr.c
 create mode 100644 arch/arm64/kvm/hyp/sysreg-sr.h

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index c0ba15c9b190..1f3a65f1b354 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -91,6 +91,8 @@ extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
 
+extern void __kvm_enable_ssbs(void);
+
 extern u64 __vgic_v3_get_ich_vtr_el2(void);
 extern u64 __vgic_v3_read_vmcr(void);
 extern void __vgic_v3_write_vmcr(u32 vmcr);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 132233b6d853..ef48866214f8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -532,8 +532,6 @@ static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
 	cpu_ctxt->sys_regs[MPIDR_EL1] = read_cpuid_mpidr();
 }
 
-void __kvm_enable_ssbs(void);
-
 static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
 				       unsigned long hyp_stack_ptr,
 				       unsigned long vector_ptr)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 0f535692d1d8..2084fd3186a7 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -67,12 +67,16 @@ int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);
void __timer_enable_traps(struct kvm_vcpu *vcpu); void __timer_disable_traps(struct kvm_vcpu *vcpu); +#ifdef __KVM_NVHE_HYPERVISOR__ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt); +#else void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt); void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt); void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt); void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt); +#endif + void __sysreg32_save_state(struct kvm_vcpu *vcpu); void __sysreg32_restore_state(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 5de3a5998bcd..bf9053d65ad7 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -66,13 +66,8 @@ __kvm_nvhe___fpsimd_save_state = __fpsimd_save_state; __kvm_nvhe___guest_enter = __guest_enter; __kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___icache_flags = __icache_flags; -__kvm_nvhe___kvm_enable_ssbs = __kvm_enable_ssbs; __kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; __kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___sysreg32_restore_state = __sysreg32_restore_state; -__kvm_nvhe___sysreg32_save_state = __sysreg32_save_state; -__kvm_nvhe___sysreg_restore_state_nvhe = __sysreg_restore_state_nvhe; -__kvm_nvhe___sysreg_save_state_nvhe = __sysreg_save_state_nvhe; __kvm_nvhe___timer_disable_traps = __timer_disable_traps; __kvm_nvhe___timer_enable_traps = __timer_enable_traps; __kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 33a80da34154..8157f6fa4c99 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := debug-sr.o switch.o tlb.o ../hyp-entry.o +obj-y := sysreg-sr.o debug-sr.o switch.o tlb.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c new file mode 100644 index 000000000000..55ab924d841a --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include + +#include +#include +#include +#include + +#include "../sysreg-sr.h" + +/* + * Non-VHE: Both host and guest must save everything. 
+ */ + +void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) +{ + __sysreg_save_el1_state(ctxt); + __sysreg_save_common_state(ctxt); + __sysreg_save_user_state(ctxt); + __sysreg_save_el2_return_state(ctxt); +} + +void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) +{ + __sysreg_restore_el1_state(ctxt); + __sysreg_restore_common_state(ctxt); + __sysreg_restore_user_state(ctxt); + __sysreg_restore_el2_return_state(ctxt); +} + +void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +{ + ___sysreg32_save_state(vcpu); +} + +void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +{ + ___sysreg32_restore_state(vcpu); +} + +void __hyp_text __kvm_enable_ssbs(void) +{ + u64 tmp; + + asm volatile( + "mrs %0, sctlr_el2\n" + "orr %0, %0, %1\n" + "msr sctlr_el2, %0" + : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); +} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 7a261ace2405..b373dc320f5c 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -12,9 +12,9 @@ #include #include +#include "sysreg-sr.h" + /* - * Non-VHE: Both host and guest must save everything. - * * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate, * which are handled as part of the el2 return state) on every switch. * tpidr_el0 and tpidrro_el0 only need to be switched when going @@ -23,66 +23,6 @@ * classes are handled as part of kvm_arch_vcpu_load and kvm_arch_vcpu_put. */ -static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); - - /* - * The host arm64 Linux uses sp_el0 to point to 'current' and it must - * therefore be saved/restored on every entry/exit to/from the guest. - */ - ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); -} - -static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); - ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); -} - -static void __hyp_text __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) -{ - ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); - ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); - ctxt->sys_regs[ACTLR_EL1] = read_sysreg(actlr_el1); - ctxt->sys_regs[CPACR_EL1] = read_sysreg_el1(SYS_CPACR); - ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(SYS_TTBR0); - ctxt->sys_regs[TTBR1_EL1] = read_sysreg_el1(SYS_TTBR1); - ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(SYS_TCR); - ctxt->sys_regs[ESR_EL1] = read_sysreg_el1(SYS_ESR); - ctxt->sys_regs[AFSR0_EL1] = read_sysreg_el1(SYS_AFSR0); - ctxt->sys_regs[AFSR1_EL1] = read_sysreg_el1(SYS_AFSR1); - ctxt->sys_regs[FAR_EL1] = read_sysreg_el1(SYS_FAR); - ctxt->sys_regs[MAIR_EL1] = read_sysreg_el1(SYS_MAIR); - ctxt->sys_regs[VBAR_EL1] = read_sysreg_el1(SYS_VBAR); - ctxt->sys_regs[CONTEXTIDR_EL1] = read_sysreg_el1(SYS_CONTEXTIDR); - ctxt->sys_regs[AMAIR_EL1] = read_sysreg_el1(SYS_AMAIR); - ctxt->sys_regs[CNTKCTL_EL1] = read_sysreg_el1(SYS_CNTKCTL); - ctxt->sys_regs[PAR_EL1] = read_sysreg(par_el1); - ctxt->sys_regs[TPIDR_EL1] = read_sysreg(tpidr_el1); - - ctxt->gp_regs.sp_el1 = read_sysreg(sp_el1); - ctxt->gp_regs.elr_el1 = read_sysreg_el1(SYS_ELR); - ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); -} - -static void __hyp_text __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) -{ - ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); - ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) - 
ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); -} - -void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) -{ - __sysreg_save_el1_state(ctxt); - __sysreg_save_common_state(ctxt); - __sysreg_save_user_state(ctxt); - __sysreg_save_el2_return_state(ctxt); -} - void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt) { __sysreg_save_common_state(ctxt); @@ -96,116 +36,6 @@ void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt) } NOKPROBE_SYMBOL(sysreg_save_guest_state_vhe); -static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); - - /* - * The host arm64 Linux uses sp_el0 to point to 'current' and it must - * therefore be saved/restored on every entry/exit to/from the guest. - */ - write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); -} - -static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); - write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); -} - -static void __hyp_text __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) -{ - write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); - write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); - - if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } else if (!ctxt->__hyp_running_vcpu) { - /* - * Must only be done for guest registers, hence the context - * test. We're coming from the host, so SCTLR.M is already - * set. Pairs with nVHE's __activate_traps(). - */ - write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | - TCR_EPD1_MASK | TCR_EPD0_MASK), - SYS_TCR); - isb(); - } - - write_sysreg(ctxt->sys_regs[ACTLR_EL1], actlr_el1); - write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR); - write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0); - write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1); - write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR); - write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0); - write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1); - write_sysreg_el1(ctxt->sys_regs[FAR_EL1], SYS_FAR); - write_sysreg_el1(ctxt->sys_regs[MAIR_EL1], SYS_MAIR); - write_sysreg_el1(ctxt->sys_regs[VBAR_EL1], SYS_VBAR); - write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],SYS_CONTEXTIDR); - write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1], SYS_AMAIR); - write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], SYS_CNTKCTL); - write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); - write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); - - if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) && - ctxt->__hyp_running_vcpu) { - /* - * Must only be done for host registers, hence the context - * test. Pairs with nVHE's __deactivate_traps(). - */ - isb(); - /* - * At this stage, and thanks to the above isb(), S2 is - * deconfigured and disabled. We can now restore the host's - * S1 configuration: SCTLR, and only then TCR. 
- */ - write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); - isb(); - write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); - } - - write_sysreg(ctxt->gp_regs.sp_el1, sp_el1); - write_sysreg_el1(ctxt->gp_regs.elr_el1, SYS_ELR); - write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); -} - -static void __hyp_text -__sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) -{ - u64 pstate = ctxt->gp_regs.regs.pstate; - u64 mode = pstate & PSR_AA32_MODE_MASK; - - /* - * Safety check to ensure we're setting the CPU up to enter the guest - * in a less privileged mode. - * - * If we are attempting a return to EL2 or higher in AArch64 state, - * program SPSR_EL2 with M=EL2h and the IL bit set which ensures that - * we'll take an illegal exception state exception immediately after - * the ERET to the guest. Attempts to return to AArch32 Hyp will - * result in an illegal exception return because EL2's execution state - * is determined by SCR_EL3.RW. - */ - if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t) - pstate = PSR_MODE_EL2h | PSR_IL_BIT; - - write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR); - write_sysreg_el2(pstate, SYS_SPSR); - - if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) - write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); -} - -void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) -{ - __sysreg_restore_el1_state(ctxt); - __sysreg_restore_common_state(ctxt); - __sysreg_restore_user_state(ctxt); - __sysreg_restore_el2_return_state(ctxt); -} - void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt) { __sysreg_restore_common_state(ctxt); @@ -219,48 +49,22 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt) } NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); -void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +void __sysreg32_save_state(struct kvm_vcpu *vcpu) { - u64 *spsr, *sysreg; - - if (!vcpu_el1_is_32bit(vcpu)) - return; - - spsr = vcpu->arch.ctxt.gp_regs.spsr; - sysreg = vcpu->arch.ctxt.sys_regs; - - spsr[KVM_SPSR_ABT] = read_sysreg(spsr_abt); - spsr[KVM_SPSR_UND] = read_sysreg(spsr_und); - spsr[KVM_SPSR_IRQ] = read_sysreg(spsr_irq); - spsr[KVM_SPSR_FIQ] = read_sysreg(spsr_fiq); - - sysreg[DACR32_EL2] = read_sysreg(dacr32_el2); - sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2); - - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); + ___sysreg32_save_state(vcpu); } -void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { - u64 *spsr, *sysreg; - - if (!vcpu_el1_is_32bit(vcpu)) - return; - - spsr = vcpu->arch.ctxt.gp_regs.spsr; - sysreg = vcpu->arch.ctxt.sys_regs; - - write_sysreg(spsr[KVM_SPSR_ABT], spsr_abt); - write_sysreg(spsr[KVM_SPSR_UND], spsr_und); - write_sysreg(spsr[KVM_SPSR_IRQ], spsr_irq); - write_sysreg(spsr[KVM_SPSR_FIQ], spsr_fiq); - - write_sysreg(sysreg[DACR32_EL2], dacr32_el2); - write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2); + ___sysreg32_restore_state(vcpu); +} - if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) - write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2); +void __kvm_enable_ssbs(void) +{ + /* + * Nothing to do on VHE. Needed because VHE and nVHE hyp code + * must expose the same interface. 
+ */ } /** @@ -329,14 +133,3 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) vcpu->arch.sysregs_loaded_on_cpu = false; } - -void __hyp_text __kvm_enable_ssbs(void) -{ - u64 tmp; - - asm volatile( - "mrs %0, sctlr_el2\n" - "orr %0, %0, %1\n" - "msr sctlr_el2, %0" - : "=&r" (tmp) : "L" (SCTLR_ELx_DSSBS)); -} diff --git a/arch/arm64/kvm/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/sysreg-sr.h new file mode 100644 index 000000000000..2e22cf23dbd5 --- /dev/null +++ b/arch/arm64/kvm/hyp/sysreg-sr.h @@ -0,0 +1,223 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#ifndef __ARM64_KVM_HYP_SYSREG_SR_H__ +#define __ARM64_KVM_HYP_SYSREG_SR_H__ + +#include +#include + +#include +#include +#include +#include + +static inline void __hyp_text +__sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); + + /* + * The host arm64 Linux uses sp_el0 to point to 'current' and it must + * therefore be saved/restored on every entry/exit to/from the guest. + */ + ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); +} + +static inline void __hyp_text +__sysreg_save_user_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); + ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); +} + +static inline void __hyp_text +__sysreg_save_el1_state(struct kvm_cpu_context *ctxt) +{ + ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); + ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); + ctxt->sys_regs[ACTLR_EL1] = read_sysreg(actlr_el1); + ctxt->sys_regs[CPACR_EL1] = read_sysreg_el1(SYS_CPACR); + ctxt->sys_regs[TTBR0_EL1] = read_sysreg_el1(SYS_TTBR0); + ctxt->sys_regs[TTBR1_EL1] = read_sysreg_el1(SYS_TTBR1); + ctxt->sys_regs[TCR_EL1] = read_sysreg_el1(SYS_TCR); + ctxt->sys_regs[ESR_EL1] = read_sysreg_el1(SYS_ESR); + ctxt->sys_regs[AFSR0_EL1] = read_sysreg_el1(SYS_AFSR0); + ctxt->sys_regs[AFSR1_EL1] = read_sysreg_el1(SYS_AFSR1); + ctxt->sys_regs[FAR_EL1] = read_sysreg_el1(SYS_FAR); + ctxt->sys_regs[MAIR_EL1] = read_sysreg_el1(SYS_MAIR); + ctxt->sys_regs[VBAR_EL1] = read_sysreg_el1(SYS_VBAR); + ctxt->sys_regs[CONTEXTIDR_EL1] = read_sysreg_el1(SYS_CONTEXTIDR); + ctxt->sys_regs[AMAIR_EL1] = read_sysreg_el1(SYS_AMAIR); + ctxt->sys_regs[CNTKCTL_EL1] = read_sysreg_el1(SYS_CNTKCTL); + ctxt->sys_regs[PAR_EL1] = read_sysreg(par_el1); + ctxt->sys_regs[TPIDR_EL1] = read_sysreg(tpidr_el1); + + ctxt->gp_regs.sp_el1 = read_sysreg(sp_el1); + ctxt->gp_regs.elr_el1 = read_sysreg_el1(SYS_ELR); + ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); +} + +static inline void __hyp_text +__sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) +{ + ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); + ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) + ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); +} + +static inline void __hyp_text +__sysreg_restore_common_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); + + /* + * The host arm64 Linux uses sp_el0 to point to 'current' and it must + * therefore be saved/restored on every entry/exit to/from the guest. 
+ */ + write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); +} + +static inline void __hyp_text +__sysreg_restore_user_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); + write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); +} + +static inline void __hyp_text +__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) +{ + write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); + write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); + + if (!cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } else if (!ctxt->__hyp_running_vcpu) { + /* + * Must only be done for guest registers, hence the context + * test. We're coming from the host, so SCTLR.M is already + * set. Pairs with nVHE's __activate_traps(). + */ + write_sysreg_el1((ctxt->sys_regs[TCR_EL1] | + TCR_EPD1_MASK | TCR_EPD0_MASK), + SYS_TCR); + isb(); + } + + write_sysreg(ctxt->sys_regs[ACTLR_EL1], actlr_el1); + write_sysreg_el1(ctxt->sys_regs[CPACR_EL1], SYS_CPACR); + write_sysreg_el1(ctxt->sys_regs[TTBR0_EL1], SYS_TTBR0); + write_sysreg_el1(ctxt->sys_regs[TTBR1_EL1], SYS_TTBR1); + write_sysreg_el1(ctxt->sys_regs[ESR_EL1], SYS_ESR); + write_sysreg_el1(ctxt->sys_regs[AFSR0_EL1], SYS_AFSR0); + write_sysreg_el1(ctxt->sys_regs[AFSR1_EL1], SYS_AFSR1); + write_sysreg_el1(ctxt->sys_regs[FAR_EL1], SYS_FAR); + write_sysreg_el1(ctxt->sys_regs[MAIR_EL1], SYS_MAIR); + write_sysreg_el1(ctxt->sys_regs[VBAR_EL1], SYS_VBAR); + write_sysreg_el1(ctxt->sys_regs[CONTEXTIDR_EL1],SYS_CONTEXTIDR); + write_sysreg_el1(ctxt->sys_regs[AMAIR_EL1], SYS_AMAIR); + write_sysreg_el1(ctxt->sys_regs[CNTKCTL_EL1], SYS_CNTKCTL); + write_sysreg(ctxt->sys_regs[PAR_EL1], par_el1); + write_sysreg(ctxt->sys_regs[TPIDR_EL1], tpidr_el1); + + if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE) && + ctxt->__hyp_running_vcpu) { + /* + * Must only be done for host registers, hence the context + * test. Pairs with nVHE's __deactivate_traps(). + */ + isb(); + /* + * At this stage, and thanks to the above isb(), S2 is + * deconfigured and disabled. We can now restore the host's + * S1 configuration: SCTLR, and only then TCR. + */ + write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], SYS_SCTLR); + isb(); + write_sysreg_el1(ctxt->sys_regs[TCR_EL1], SYS_TCR); + } + + write_sysreg(ctxt->gp_regs.sp_el1, sp_el1); + write_sysreg_el1(ctxt->gp_regs.elr_el1, SYS_ELR); + write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); +} + +static inline void __hyp_text +__sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) +{ + u64 pstate = ctxt->gp_regs.regs.pstate; + u64 mode = pstate & PSR_AA32_MODE_MASK; + + /* + * Safety check to ensure we're setting the CPU up to enter the guest + * in a less privileged mode. + * + * If we are attempting a return to EL2 or higher in AArch64 state, + * program SPSR_EL2 with M=EL2h and the IL bit set which ensures that + * we'll take an illegal exception state exception immediately after + * the ERET to the guest. Attempts to return to AArch32 Hyp will + * result in an illegal exception return because EL2's execution state + * is determined by SCR_EL3.RW. 
+ */ + if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t) + pstate = PSR_MODE_EL2h | PSR_IL_BIT; + + write_sysreg_el2(ctxt->gp_regs.regs.pc, SYS_ELR); + write_sysreg_el2(pstate, SYS_SPSR); + + if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) + write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); +} + +static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) +{ + u64 *spsr, *sysreg; + + if (!vcpu_el1_is_32bit(vcpu)) + return; + + spsr = vcpu->arch.ctxt.gp_regs.spsr; + sysreg = vcpu->arch.ctxt.sys_regs; + + spsr[KVM_SPSR_ABT] = read_sysreg(spsr_abt); + spsr[KVM_SPSR_UND] = read_sysreg(spsr_und); + spsr[KVM_SPSR_IRQ] = read_sysreg(spsr_irq); + spsr[KVM_SPSR_FIQ] = read_sysreg(spsr_fiq); + + sysreg[DACR32_EL2] = read_sysreg(dacr32_el2); + sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2); + + if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); +} + +static inline void __hyp_text ___sysreg32_restore_state(struct kvm_vcpu *vcpu) +{ + u64 *spsr, *sysreg; + + if (!vcpu_el1_is_32bit(vcpu)) + return; + + spsr = vcpu->arch.ctxt.gp_regs.spsr; + sysreg = vcpu->arch.ctxt.sys_regs; + + write_sysreg(spsr[KVM_SPSR_ABT], spsr_abt); + write_sysreg(spsr[KVM_SPSR_UND], spsr_und); + write_sysreg(spsr[KVM_SPSR_IRQ], spsr_irq); + write_sysreg(spsr[KVM_SPSR_FIQ], spsr_fiq); + + write_sysreg(sysreg[DACR32_EL2], dacr32_el2); + write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2); + + if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) + write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2); +} + +#endif /* __ARM64_KVM_HYP_SYSREG_SR_H__ */
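A note on the EL2-return safety check moved into this header: the logic is self-contained enough to demonstrate in isolation. The sketch below restates it as a pure function, assuming only the PSR_* constants from <asm/ptrace.h>; it is an illustration, not code from the series.

	/* Sketch: sanitise the pstate a guest will be entered with. */
	static u64 sanitise_return_pstate(u64 pstate)
	{
		u64 mode = pstate & PSR_AA32_MODE_MASK;

		/* AArch64 mode at EL2 or above: force an illegal return. */
		if (!(mode & PSR_MODE32_BIT) && mode >= PSR_MODE_EL2t)
			return PSR_MODE_EL2h | PSR_IL_BIT;

		return pstate;
	}

For example, a saved pstate of PSR_MODE_EL2h comes back as PSR_MODE_EL2h | PSR_IL_BIT, so the ERET into the guest takes an illegal exception state exception instead of running guest code at EL2.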
From patchwork Fri May 15 10:58:37 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11551209
From: David Brazdil
To: Catalin Marinas , James Morse , Julien Thierry , Marc Zyngier , Suzuki K Poulose , Will Deacon
Cc: David Brazdil , kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/14] arm64: kvm: Split hyp/timer-sr.c to VHE/nVHE
Date: Fri, 15 May 2020 11:58:37 +0100
Message-Id: <20200515105841.73532-11-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>
This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel. timer-sr.c contains an HVC handler for setting CNTVOFF_EL2 and two helper functions for controlling access to the physical counter. The former is shared between VHE/nVHE and is kept in timer-sr.c but compiled under both configs. The latter are nVHE-specific and are moved to nvhe/timer-sr.c. Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_hyp.h | 2 ++ arch/arm64/kernel/image-vars.h | 3 --- arch/arm64/kvm/hyp/nvhe/Makefile | 3 ++- arch/arm64/kvm/hyp/nvhe/timer-sr.c | 43 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/timer-sr.c | 36 ------------------------- 5 files changed, 47 insertions(+), 40 deletions(-) create mode 100644 arch/arm64/kvm/hyp/nvhe/timer-sr.c diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 2084fd3186a7..f9fa7fd7a0f3 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -64,8 +64,10 @@ void __vgic_v3_save_aprs(struct kvm_vcpu *vcpu); void __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu); int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu); +#ifdef __KVM_NVHE_HYPERVISOR__ void __timer_enable_traps(struct kvm_vcpu *vcpu); void __timer_disable_traps(struct kvm_vcpu *vcpu); +#endif #ifdef __KVM_NVHE_HYPERVISOR__ void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index bf9053d65ad7..c16cf4e2cd8b 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -67,9 +67,6 @@ __kvm_nvhe___guest_enter = __guest_enter; __kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; -__kvm_nvhe___kvm_timer_set_cntvoff = __kvm_timer_set_cntvoff; -__kvm_nvhe___timer_disable_traps = __timer_disable_traps; -__kvm_nvhe___timer_enable_traps = __timer_enable_traps; __kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; __kvm_nvhe___vgic_v3_activate_traps = __vgic_v3_activate_traps; __kvm_nvhe___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 8157f6fa4c99..a67958f29fd7 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,7 +7,8 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING
$(DISABLE_STACKLEAK_PLUGIN) -obj-y := sysreg-sr.o debug-sr.o switch.o tlb.o ../hyp-entry.o +obj-y := ../timer-sr.o timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o \ + ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y)) diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c new file mode 100644 index 000000000000..f0e694743883 --- /dev/null +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -0,0 +1,43 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2012-2015 - ARM Ltd + * Author: Marc Zyngier + */ + +#include +#include +#include + +#include + +/* + * Should only be called on non-VHE systems. + * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). + */ +void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + /* Allow physical timer/counter access for the host */ + val = read_sysreg(cnthctl_el2); + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; + write_sysreg(val, cnthctl_el2); +} + +/* + * Should only be called on non-VHE systems. + * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). + */ +void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) +{ + u64 val; + + /* + * Disallow physical timer access for the guest + * Physical counter access is allowed + */ + val = read_sysreg(cnthctl_el2); + val &= ~CNTHCTL_EL1PCEN; + val |= CNTHCTL_EL1PCTEN; + write_sysreg(val, cnthctl_el2); +} diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c index ff76e6845fe4..46e303281a2c 100644 --- a/arch/arm64/kvm/hyp/timer-sr.c +++ b/arch/arm64/kvm/hyp/timer-sr.c @@ -4,10 +4,6 @@ * Author: Marc Zyngier */ -#include -#include -#include - #include void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) @@ -15,35 +11,3 @@ void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low; write_sysreg(cntvoff, cntvoff_el2); } - -/* - * Should only be called on non-VHE systems. - * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). - */ -void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) -{ - u64 val; - - /* Allow physical timer/counter access for the host */ - val = read_sysreg(cnthctl_el2); - val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; - write_sysreg(val, cnthctl_el2); -} - -/* - * Should only be called on non-VHE systems. - * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). 
- */ -void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) -{ - u64 val; - - /* - * Disallow physical timer access for the guest - * Physical counter access is allowed - */ - val = read_sysreg(cnthctl_el2); - val &= ~CNTHCTL_EL1PCEN; - val |= CNTHCTL_EL1PCTEN; - write_sysreg(val, cnthctl_el2); -}
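One detail of the patch above worth spelling out: __kvm_timer_set_cntvoff() takes the 64-bit offset as two u32 halves and recombines them with (u64)cntvoff_high << 32 | cntvoff_low. A minimal caller sketch, assuming the kvm_call_hyp() wrapper and the lower_32_bits()/upper_32_bits() helpers from kernel proper (illustrative, not part of this diff):

	/* Sketch: host side of the CNTVOFF_EL2 hyp call. */
	static void set_cntvoff(u64 cntvoff)
	{
		u32 low = lower_32_bits(cntvoff);
		u32 high = upper_32_bits(cntvoff);

		kvm_call_hyp(__kvm_timer_set_cntvoff, low, high);
	}

The two-halves convention predates this series; the handler only reassembles the value before writing cntvoff_el2.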
From patchwork Fri May 15 10:58:38 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11551211
From: David Brazdil
To: Catalin Marinas , James Morse , Julien Thierry , Marc Zyngier , Suzuki K Poulose , Will Deacon
Cc: David Brazdil , kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 11/14] arm64: kvm: Compile remaining hyp/ files for both VHE/nVHE
Date: Fri, 15 May 2020 11:58:38 +0100
Message-Id: <20200515105841.73532-12-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel.
The following files in hyp/ contain only code shared by VHE/nVHE: vgic-v3-sr.c, aarch32.c, vgic-v2-cpuif-proxy.c, entry.S, fpsimd.S. Compile them under both configurations. Deletions in image-vars.h reflect eliminated dependencies of nVHE code on the rest of the kernel. Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 19 ------------------- arch/arm64/kvm/hyp/nvhe/Makefile | 5 +++-- 2 files changed, 3 insertions(+), 21 deletions(-) diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index c16cf4e2cd8b..217e5e5a101d 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,26 +61,8 @@ __efistub__ctype = _ctype; * memory mappings. */ -__kvm_nvhe___fpsimd_restore_state = __fpsimd_restore_state; -__kvm_nvhe___fpsimd_save_state = __fpsimd_save_state; -__kvm_nvhe___guest_enter = __guest_enter; -__kvm_nvhe___guest_exit = __guest_exit; __kvm_nvhe___icache_flags = __icache_flags; __kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; -__kvm_nvhe___vgic_v2_perform_cpuif_access = __vgic_v2_perform_cpuif_access; -__kvm_nvhe___vgic_v3_activate_traps = __vgic_v3_activate_traps; -__kvm_nvhe___vgic_v3_deactivate_traps = __vgic_v3_deactivate_traps; -__kvm_nvhe___vgic_v3_get_ich_vtr_el2 = __vgic_v3_get_ich_vtr_el2; -__kvm_nvhe___vgic_v3_init_lrs = __vgic_v3_init_lrs; -__kvm_nvhe___vgic_v3_perform_cpuif_access = __vgic_v3_perform_cpuif_access; -__kvm_nvhe___vgic_v3_read_vmcr = __vgic_v3_read_vmcr; -__kvm_nvhe___vgic_v3_restore_aprs = __vgic_v3_restore_aprs; -__kvm_nvhe___vgic_v3_restore_state = __vgic_v3_restore_state; -__kvm_nvhe___vgic_v3_save_aprs = __vgic_v3_save_aprs; -__kvm_nvhe___vgic_v3_save_state = __vgic_v3_save_state; -__kvm_nvhe___vgic_v3_write_vmcr = __vgic_v3_write_vmcr; -__kvm_nvhe_abort_guest_exit_end = abort_guest_exit_end; -__kvm_nvhe_abort_guest_exit_start = abort_guest_exit_start; __kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready; __kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; __kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; @@ -89,7 +71,6 @@ __kvm_nvhe_cpu_hwcaps = cpu_hwcaps; __kvm_nvhe_kimage_voffset = kimage_voffset; __kvm_nvhe_kvm_host_data = kvm_host_data; __kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch; -__kvm_nvhe_kvm_skip_instr32 = kvm_skip_instr32; __kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask; __kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state; __kvm_nvhe_panic = panic; diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index a67958f29fd7..819d5271c49a 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -7,8 +7,9 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -fno-stack-protector \ -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) -obj-y := ../timer-sr.o timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o \ - ../hyp-entry.o +obj-y := ../vgic-v3-sr.o ../timer-sr.o timer-sr.o ../aarch32.o \ + ../vgic-v2-cpuif-proxy.o sysreg-sr.o debug-sr.o ../entry.o switch.o \ + ../fpsimd.o tlb.o ../hyp-entry.o obj-y := $(patsubst %.o,%.hyp.o,$(obj-y)) extra-y := $(patsubst %.hyp.o,%.hyp.tmp.o,$(obj-y))
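The Makefile change above is the whole trick for sharing sources: the same .c and .S files are compiled a second time with -D__KVM_NVHE_HYPERVISOR__ and the objects renamed to *.hyp.o. Where a shared header must expose something to only one world, that define is the switch. A minimal sketch of the pattern, using declarations that appear earlier in the series (layout illustrative):

	/* Visible to both the VHE and nVHE builds. */
	int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu);

	/* nVHE-only interface, hidden from kernel proper and VHE. */
	#ifdef __KVM_NVHE_HYPERVISOR__
	void __timer_enable_traps(struct kvm_vcpu *vcpu);
	void __timer_disable_traps(struct kvm_vcpu *vcpu);
	#endif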
From patchwork Fri May 15 10:58:39 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11551215
From: David Brazdil
To: Catalin Marinas , James Morse , Julien Thierry , Marc Zyngier , Suzuki K Poulose , Will Deacon
Cc: David Brazdil , kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/14] arm64: kvm: Add comments around __kvm_nvhe_ symbol aliases
Date: Fri, 15 May 2020 11:58:39 +0100
Message-Id: <20200515105841.73532-13-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>

This patch is part of a series which builds KVM's non-VHE hyp code separately from VHE and the rest of the kernel. With all source files split between VHE/nVHE, add comments around the list of symbols where nVHE code still links against kernel proper. Split them into groups and explain how each group is currently used. Some of these dependencies will be removed in the future. Signed-off-by: David Brazdil --- arch/arm64/kernel/image-vars.h | 47 ++++++++++++++++++++++------------ 1 file changed, 30 insertions(+), 17 deletions(-) diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 217e5e5a101d..0b3a3fe07a64 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -61,23 +61,36 @@ __efistub__ctype = _ctype; * memory mappings.
*/ -__kvm_nvhe___icache_flags = __icache_flags; -__kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; -__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready; -__kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; -__kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; -__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys; -__kvm_nvhe_cpu_hwcaps = cpu_hwcaps; -__kvm_nvhe_kimage_voffset = kimage_voffset; -__kvm_nvhe_kvm_host_data = kvm_host_data; -__kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch; -__kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask; -__kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state; -__kvm_nvhe_panic = panic; -__kvm_nvhe_sve_load_state = sve_load_state; -__kvm_nvhe_sve_save_state = sve_save_state; -__kvm_nvhe_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; -__kvm_nvhe_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; +/* If nVHE code panics, it ERETs into panic() in EL1. */ +__kvm_nvhe_panic = panic; + +/* Stub HVC IDs are routed to a handler in .hyp.idmap.text. Executed in EL2. */ +__kvm_nvhe___kvm_handle_stub_hvc = __kvm_handle_stub_hvc; + +/* Alternative callbacks, referenced in .altinstructions. Executed in EL1. */ +__kvm_nvhe_arm64_enable_wa2_handling = arm64_enable_wa2_handling; +__kvm_nvhe_kvm_patch_vector_branch = kvm_patch_vector_branch; +__kvm_nvhe_kvm_update_va_mask = kvm_update_va_mask; + +/* Values used to convert between memory mappings, read-only after init. */ +__kvm_nvhe_kimage_voffset = kimage_voffset; + +/* Data shared with the kernel. */ +__kvm_nvhe_cpu_hwcaps = cpu_hwcaps; +__kvm_nvhe_cpu_hwcap_keys = cpu_hwcap_keys; +__kvm_nvhe___icache_flags = __icache_flags; +__kvm_nvhe_kvm_vgic_global_state = kvm_vgic_global_state; +__kvm_nvhe_arm64_ssbd_callback_required = arm64_ssbd_callback_required; +__kvm_nvhe_kvm_host_data = kvm_host_data; + +/* Static keys shared with the kernel. */ +__kvm_nvhe_arm64_const_caps_ready = arm64_const_caps_ready; +__kvm_nvhe_vgic_v2_cpuif_trap = vgic_v2_cpuif_trap; +__kvm_nvhe_vgic_v3_cpuif_trap = vgic_v3_cpuif_trap; + +/* SVE support, currently unused by nVHE. 
*/ +__kvm_nvhe_sve_save_state = sve_save_state; +__kvm_nvhe_sve_load_state = sve_load_state; #endif /* CONFIG_KVM */
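To see why these aliases exist at all, follow one symbol through the build. nVHE objects are post-processed with objcopy --prefix-symbols=__kvm_nvhe_, so even their undefined references come out prefixed; the linker-script assignments above then point each prefixed name back at the kernel-proper symbol. A sketch of the round trip for the panic() dependency (comments only, nothing here is new code from the series):

	/* 1. hyp source compiles a reference to panic()               */
	/* 2. objcopy renames it in the object:  U __kvm_nvhe_panic    */
	/* 3. image-vars.h resolves it back:  __kvm_nvhe_panic = panic; */

Grouping the list by purpose, as this patch does, turns it into a checklist of the remaining EL1 dependencies and makes the ones slated for removal easy to spot.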
From patchwork Fri May 15 10:58:40 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11551221
From: David Brazdil
To: Catalin Marinas , James Morse , Julien Thierry , Marc Zyngier , Suzuki K Poulose , Will Deacon
Cc: David Brazdil , kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 13/14] arm64: kvm: Remove __hyp_text macro, use build rules instead
Date: Fri, 15 May 2020 11:58:40 +0100
Message-Id: <20200515105841.73532-14-dbrazdil@google.com>
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>

With nVHE code now fully separated from the rest of the kernel, the effects of the __hyp_text macro (which had to be applied on all nVHE code) can be achieved with build rules instead. The macro used to: (a) move code to the .hyp.text ELF section, now done by renaming .text using `objcopy`, and (b) apply `notrace` to negate the effects of CC_FLAGS_FTRACE, now done by erasing those flags from KBUILD_CFLAGS (same way as in the EFI stub).
Note that by removing __hyp_text from code shared with VHE, all VHE code is now compiled into .text and without `notrace`. Use of '.pushsection .hyp.text' removed from assembly files as this is now also covered by the build rules. For MAINTAINERS: if needed to re-run, uses of macro were removed with the following command. Formatting was fixed up manually. find arch/arm64/kvm/hyp -type f -name '*.c' -o -name '*.h' \ -exec sed -i 's/ __hyp_text//g' {} + Signed-off-by: David Brazdil --- arch/arm64/include/asm/kvm_emulate.h | 2 +- arch/arm64/include/asm/kvm_hyp.h | 4 +- arch/arm64/kvm/hyp/aarch32.c | 6 +- arch/arm64/kvm/hyp/debug-sr.h | 18 ++-- arch/arm64/kvm/hyp/entry.S | 1 - arch/arm64/kvm/hyp/fpsimd.S | 1 - arch/arm64/kvm/hyp/hyp-entry.S | 1 - arch/arm64/kvm/hyp/nvhe/Makefile | 7 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 10 +- arch/arm64/kvm/hyp/nvhe/switch.c | 18 ++-- arch/arm64/kvm/hyp/nvhe/sysreg-sr.c | 10 +- arch/arm64/kvm/hyp/nvhe/timer-sr.c | 4 +- arch/arm64/kvm/hyp/nvhe/tlb.c | 14 ++- arch/arm64/kvm/hyp/switch.h | 35 +++--- arch/arm64/kvm/hyp/sysreg-sr.h | 27 ++--- arch/arm64/kvm/hyp/timer-sr.c | 2 +- arch/arm64/kvm/hyp/tlb.c | 6 +- arch/arm64/kvm/hyp/tlb.h | 15 ++- arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c | 4 +- arch/arm64/kvm/hyp/vgic-v3-sr.c | 130 ++++++++++------------- 20 files changed, 141 insertions(+), 174 deletions(-) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index a30b4eec7cb4..1666ecbfaac7 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -520,7 +520,7 @@ static __always_inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_i * Skip an instruction which has been emulated at hyp while most guest sysregs * are live. */ -static __always_inline void __hyp_text __kvm_skip_instr(struct kvm_vcpu *vcpu) +static __always_inline void __kvm_skip_instr(struct kvm_vcpu *vcpu) { *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index f9fa7fd7a0f3..59a037b32c81 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -13,8 +13,6 @@ #include #include -#define __hyp_text __section(.hyp.text) notrace - #define read_sysreg_elx(r,nvh,vh) \ ({ \ u64 reg; \ @@ -103,7 +101,7 @@ void __noreturn __hyp_do_panic(unsigned long, ...); * Must be called from hyp code running at EL2 with an updated VTTBR * and interrupts disabled. */ -static __always_inline void __hyp_text __load_guest_stage2(struct kvm *kvm) +static __always_inline void __load_guest_stage2(struct kvm *kvm) { write_sysreg(kvm->arch.vtcr, vtcr_el2); write_sysreg(kvm_get_vttbr(kvm), vttbr_el2); diff --git a/arch/arm64/kvm/hyp/aarch32.c b/arch/arm64/kvm/hyp/aarch32.c index d31f267961e7..44fecab99bbe 100644 --- a/arch/arm64/kvm/hyp/aarch32.c +++ b/arch/arm64/kvm/hyp/aarch32.c @@ -44,7 +44,7 @@ static const unsigned short cc_map[16] = { /* * Check if a trapped instruction should have been executed or not. 
*/ -bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu) +bool kvm_condition_valid32(const struct kvm_vcpu *vcpu) { unsigned long cpsr; u32 cpsr_cond; @@ -93,7 +93,7 @@ bool __hyp_text kvm_condition_valid32(const struct kvm_vcpu *vcpu) * * IT[7:0] -> CPSR[26:25],CPSR[15:10] */ -static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu) +static void kvm_adjust_itstate(struct kvm_vcpu *vcpu) { unsigned long itbits, cond; unsigned long cpsr = *vcpu_cpsr(vcpu); @@ -123,7 +123,7 @@ static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu) * kvm_skip_instr - skip a trapped instruction and proceed to the next * @vcpu: The vcpu pointer */ -void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) +void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) { bool is_thumb; diff --git a/arch/arm64/kvm/hyp/debug-sr.h b/arch/arm64/kvm/hyp/debug-sr.h index 6a94553493a1..e315a0093b5f 100644 --- a/arch/arm64/kvm/hyp/debug-sr.h +++ b/arch/arm64/kvm/hyp/debug-sr.h @@ -88,9 +88,9 @@ default: write_debug(ptr[0], reg, 0); \ } -static inline void __hyp_text -__debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +static inline void __debug_save_state(struct kvm_vcpu *vcpu, + struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) { u64 aa64dfr0; int brps, wrps; @@ -107,9 +107,9 @@ __debug_save_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, ctxt->sys_regs[MDCCINT_EL1] = read_sysreg(mdccint_el1); } -static inline void __hyp_text -__debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, - struct kvm_cpu_context *ctxt) +static inline void __debug_restore_state(struct kvm_vcpu *vcpu, + struct kvm_guest_debug_arch *dbg, + struct kvm_cpu_context *ctxt) { u64 aa64dfr0; int brps, wrps; @@ -127,8 +127,7 @@ __debug_restore_state(struct kvm_vcpu *vcpu, struct kvm_guest_debug_arch *dbg, write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1); } -static inline void __hyp_text -__debug_switch_to_guest_common(struct kvm_vcpu *vcpu) +static inline void __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -147,8 +146,7 @@ __debug_switch_to_guest_common(struct kvm_vcpu *vcpu) __debug_restore_state(vcpu, guest_dbg, guest_ctxt); } -static inline void __hyp_text -__debug_switch_to_host_common(struct kvm_vcpu *vcpu) +static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index d22d0534dd60..01b946af75b9 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -20,7 +20,6 @@ #define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x) .text - .pushsection .hyp.text, "ax" /* * We treat x18 as callee-saved as the host may use it as a platform diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 5b8ff517ff10..01f114aa47b0 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -9,7 +9,6 @@ #include .text - .pushsection .hyp.text, "ax" SYM_FUNC_START(__fpsimd_save_state) fpsimd_save x0, 1 diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index 7868f78b197a..cb2c5c0a76bd 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -17,7 +17,6 @@ #include .text - .pushsection .hyp.text, "ax" el1_sync: // Guest trapped into EL2 diff --git 
a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile index 819d5271c49a..057c534c33b9 100644 --- a/arch/arm64/kvm/hyp/nvhe/Makefile +++ b/arch/arm64/kvm/hyp/nvhe/Makefile @@ -22,7 +22,12 @@ $(obj)/%.hyp.o: $(obj)/%.hyp.tmp.o FORCE $(call if_changed,hypcopy) quiet_cmd_hypcopy = HYPCOPY $@ - cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@ + cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ \ + --rename-section=.text=.hyp.text \ + $< $@ + +# Remove ftrace CFLAGS, this is equivalent to the 'notrace' annotation. +KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) # KVM nVHE code is run at a different exception code with a different map, so # compiler instrumentation that inserts callbacks or checks into the code may diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index b3752cfdcf3d..bb5c529da394 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -14,7 +14,7 @@ #include "../debug-sr.h" -static void __hyp_text __debug_save_spe(u64 *pmscr_el1) +static void __debug_save_spe(u64 *pmscr_el1) { u64 reg; @@ -46,7 +46,7 @@ static void __hyp_text __debug_save_spe(u64 *pmscr_el1) dsb(nsh); } -static void __hyp_text __debug_restore_spe(u64 pmscr_el1) +static void __debug_restore_spe(u64 pmscr_el1) { if (!pmscr_el1) return; @@ -58,20 +58,20 @@ static void __hyp_text __debug_restore_spe(u64 pmscr_el1) write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1); } -void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) +void __debug_switch_to_guest(struct kvm_vcpu *vcpu) { /* Disable and flush SPE data generation */ __debug_save_spe(&vcpu->arch.host_debug_state.pmscr_el1); __debug_switch_to_guest_common(vcpu); } -void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) +void __debug_switch_to_host(struct kvm_vcpu *vcpu) { __debug_restore_spe(vcpu->arch.host_debug_state.pmscr_el1); __debug_switch_to_host_common(vcpu); } -u32 __hyp_text __kvm_get_mdcr_el2(void) +u32 __kvm_get_mdcr_el2(void) { return read_sysreg(mdcr_el2); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 4294beed3dc1..ffea4efe8d92 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -26,7 +26,7 @@ #include "../switch.h" -static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) +static void __activate_traps(struct kvm_vcpu *vcpu) { u64 val; @@ -57,7 +57,7 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) } } -static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) +static void __deactivate_traps(struct kvm_vcpu *vcpu) { u64 mdcr_el2; @@ -92,13 +92,13 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu) write_sysreg(CPTR_EL2_DEFAULT, cptr_el2); } -static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu) +static void __deactivate_vm(struct kvm_vcpu *vcpu) { write_sysreg(0, vttbr_el2); } /* Save VGICv3 state on non-VHE systems */ -static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) +static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { __vgic_v3_save_state(vcpu); @@ -107,7 +107,7 @@ static void __hyp_text __hyp_vgic_save_state(struct kvm_vcpu *vcpu) } /* Restore VGICv3 state on non_VEH systems */ -static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) +static void __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) { if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) { __vgic_v3_activate_traps(vcpu); 
@@ -118,7 +118,7 @@ static void __hyp_text __hyp_vgic_restore_state(struct kvm_vcpu *vcpu) /** * Disable host events, enable guest events */ -static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) +static bool __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) { struct kvm_host_data *host; struct kvm_pmu_events *pmu; @@ -138,7 +138,7 @@ static bool __hyp_text __pmu_switch_to_guest(struct kvm_cpu_context *host_ctxt) /** * Disable guest events, enable host events */ -static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) +static void __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) { struct kvm_host_data *host; struct kvm_pmu_events *pmu; @@ -154,7 +154,7 @@ static void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context *host_ctxt) } /* Switch to the guest for legacy non-VHE systems */ -int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) +int __kvm_vcpu_run(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; @@ -241,7 +241,7 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) return exit_code; } -void __hyp_text __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) +void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt) { u64 spsr = read_sysreg_el2(SYS_SPSR); u64 elr = read_sysreg_el2(SYS_ELR); diff --git a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c index 55ab924d841a..b1da891bf307 100644 --- a/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/sysreg-sr.c @@ -18,7 +18,7 @@ * Non-VHE: Both host and guest must save everything. */ -void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) +void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) { __sysreg_save_el1_state(ctxt); __sysreg_save_common_state(ctxt); @@ -26,7 +26,7 @@ void __hyp_text __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt) __sysreg_save_el2_return_state(ctxt); } -void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) +void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) { __sysreg_restore_el1_state(ctxt); __sysreg_restore_common_state(ctxt); @@ -34,17 +34,17 @@ void __hyp_text __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt) __sysreg_restore_el2_return_state(ctxt); } -void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) +void __sysreg32_save_state(struct kvm_vcpu *vcpu) { ___sysreg32_save_state(vcpu); } -void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) +void __sysreg32_restore_state(struct kvm_vcpu *vcpu) { ___sysreg32_restore_state(vcpu); } -void __hyp_text __kvm_enable_ssbs(void) +void __kvm_enable_ssbs(void) { u64 tmp; diff --git a/arch/arm64/kvm/hyp/nvhe/timer-sr.c b/arch/arm64/kvm/hyp/nvhe/timer-sr.c index f0e694743883..8b80a4c4c4c6 100644 --- a/arch/arm64/kvm/hyp/nvhe/timer-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/timer-sr.c @@ -14,7 +14,7 @@ * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). */ -void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) +void __timer_disable_traps(struct kvm_vcpu *vcpu) { u64 val; @@ -28,7 +28,7 @@ void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) * Should only be called on non-VHE systems. * VHE systems use EL2 timers and configure EL1 timers in kvm_timer_init_vhe(). 
*/ -void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) +void __timer_enable_traps(struct kvm_vcpu *vcpu) { u64 val; diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c index 1b8f4000f98c..151fc9cc2553 100644 --- a/arch/arm64/kvm/hyp/nvhe/tlb.c +++ b/arch/arm64/kvm/hyp/nvhe/tlb.c @@ -12,8 +12,7 @@ #include "../tlb.h" -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt) { if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT_NVHE)) { u64 val; @@ -35,8 +34,7 @@ static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt) { write_sysreg(0, vttbr_el2); @@ -48,22 +46,22 @@ static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, } } -void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { __tlb_flush_vmid_ipa(kvm, ipa); } -void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm) +void __kvm_tlb_flush_vmid(struct kvm *kvm) { __tlb_flush_vmid(kvm); } -void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) +void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { __tlb_flush_local_vmid(vcpu); } -void __hyp_text __kvm_flush_vm_context(void) +void __kvm_flush_vm_context(void) { __tlb_flush_vm_context(); } diff --git a/arch/arm64/kvm/hyp/switch.h b/arch/arm64/kvm/hyp/switch.h index 0ce8185e26db..92a5ab1564b0 100644 --- a/arch/arm64/kvm/hyp/switch.h +++ b/arch/arm64/kvm/hyp/switch.h @@ -30,7 +30,7 @@ static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n"; /* Check whether the FP regs were dirtied while in the host-side run loop: */ -static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) +static inline bool update_fp_enabled(struct kvm_vcpu *vcpu) { /* * When the system doesn't support FP/SIMD, we cannot rely on @@ -48,7 +48,7 @@ static inline bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) } /* Save the 32-bit only FPSIMD system register state */ -static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) +static inline void __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) { if (!vcpu_el1_is_32bit(vcpu)) return; @@ -56,7 +56,7 @@ static inline void __hyp_text __fpsimd_save_fpexc32(struct kvm_vcpu *vcpu) vcpu->arch.ctxt.sys_regs[FPEXC32_EL2] = read_sysreg(fpexc32_el2); } -static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) +static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) { /* * We are about to set CPTR_EL2.TFP to trap all floating point @@ -73,7 +73,7 @@ static inline void __hyp_text __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) } } -static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) +static inline void __activate_traps_common(struct kvm_vcpu *vcpu) { /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ write_sysreg(1 << 15, hstr_el2); @@ -89,13 +89,13 @@ static inline void __hyp_text __activate_traps_common(struct kvm_vcpu *vcpu) write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2); } -static inline void __hyp_text __deactivate_traps_common(void) +static inline void __deactivate_traps_common(void) { write_sysreg(0, hstr_el2); write_sysreg(0, pmuserenr_el0); } -static inline 
void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) +static inline void ___activate_traps(struct kvm_vcpu *vcpu) { u64 hcr = vcpu->arch.hcr_el2; @@ -108,7 +108,7 @@ static inline void __hyp_text ___activate_traps(struct kvm_vcpu *vcpu) write_sysreg_s(vcpu->arch.vsesr_el2, SYS_VSESR_EL2); } -static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) +static inline void ___deactivate_traps(struct kvm_vcpu *vcpu) { /* * If we pended a virtual abort, preserve it until it gets @@ -122,12 +122,12 @@ static inline void __hyp_text ___deactivate_traps(struct kvm_vcpu *vcpu) } } -static inline void __hyp_text __activate_vm(struct kvm *kvm) +static inline void __activate_vm(struct kvm *kvm) { __load_guest_stage2(kvm); } -static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) +static inline bool __translate_far_to_hpfar(u64 far, u64 *hpfar) { u64 par, tmp; @@ -156,7 +156,7 @@ static inline bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar) return true; } -static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) +static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) { u8 ec; u64 esr; @@ -196,7 +196,7 @@ static inline bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu) } /* Check for an FPSIMD/SVE trap and handle as appropriate */ -static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) +static inline bool __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) { bool vhe, sve_guest, sve_host; u8 hsr_ec; @@ -278,7 +278,7 @@ static inline bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu) return true; } -static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) +static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) { u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_hsr(vcpu)); int rt = kvm_vcpu_sys_get_rt(vcpu); @@ -338,8 +338,7 @@ static inline bool __hyp_text handle_tx2_tvm(struct kvm_vcpu *vcpu) * the guest, false when we should restore the host state and return to the * main run loop. 
*/ -static inline bool __hyp_text -fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) +static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) { if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR); @@ -408,7 +407,7 @@ fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) return false; } -static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) +static inline bool __needs_ssbd_off(struct kvm_vcpu *vcpu) { if (!cpus_have_final_cap(ARM64_SSBD)) return false; @@ -416,8 +415,7 @@ static inline bool __hyp_text __needs_ssbd_off(struct kvm_vcpu *vcpu) return !(vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG); } -static inline void __hyp_text -__set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) +static inline void __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) { #ifdef CONFIG_ARM64_SSBD /* @@ -430,8 +428,7 @@ __set_guest_arch_workaround_state(struct kvm_vcpu *vcpu) #endif } -static inline void __hyp_text -__set_host_arch_workaround_state(struct kvm_vcpu *vcpu) +static inline void __set_host_arch_workaround_state(struct kvm_vcpu *vcpu) { #ifdef CONFIG_ARM64_SSBD /* diff --git a/arch/arm64/kvm/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/sysreg-sr.h index 2e22cf23dbd5..c4860ee3117a 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/sysreg-sr.h @@ -15,8 +15,7 @@ #include #include -static inline void __hyp_text -__sysreg_save_common_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); @@ -27,15 +26,13 @@ __sysreg_save_common_state(struct kvm_cpu_context *ctxt) ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); } -static inline void __hyp_text -__sysreg_save_user_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[TPIDR_EL0] = read_sysreg(tpidr_el0); ctxt->sys_regs[TPIDRRO_EL0] = read_sysreg(tpidrro_el0); } -static inline void __hyp_text -__sysreg_save_el1_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt->sys_regs[CSSELR_EL1] = read_sysreg(csselr_el1); ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(SYS_SCTLR); @@ -61,8 +58,7 @@ __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) ctxt->gp_regs.spsr[KVM_SPSR_EL1]= read_sysreg_el1(SYS_SPSR); } -static inline void __hyp_text -__sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) { ctxt->gp_regs.regs.pc = read_sysreg_el2(SYS_ELR); ctxt->gp_regs.regs.pstate = read_sysreg_el2(SYS_SPSR); @@ -71,8 +67,7 @@ __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt) ctxt->sys_regs[DISR_EL1] = read_sysreg_s(SYS_VDISR_EL2); } -static inline void __hyp_text -__sysreg_restore_common_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); @@ -83,15 +78,13 @@ __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); } -static inline void __hyp_text -__sysreg_restore_user_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[TPIDR_EL0], tpidr_el0); write_sysreg(ctxt->sys_regs[TPIDRRO_EL0], tpidrro_el0); } -static inline void __hyp_text 
-__sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) +static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt->sys_regs[MPIDR_EL1], vmpidr_el2); write_sysreg(ctxt->sys_regs[CSSELR_EL1], csselr_el1); @@ -149,7 +142,7 @@ __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) write_sysreg_el1(ctxt->gp_regs.spsr[KVM_SPSR_EL1],SYS_SPSR); } -static inline void __hyp_text +static inline void __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) { u64 pstate = ctxt->gp_regs.regs.pstate; @@ -176,7 +169,7 @@ __sysreg_restore_el2_return_state(struct kvm_cpu_context *ctxt) write_sysreg_s(ctxt->sys_regs[DISR_EL1], SYS_VDISR_EL2); } -static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) +static inline void ___sysreg32_save_state(struct kvm_vcpu *vcpu) { u64 *spsr, *sysreg; @@ -198,7 +191,7 @@ static inline void __hyp_text ___sysreg32_save_state(struct kvm_vcpu *vcpu) sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); } -static inline void __hyp_text ___sysreg32_restore_state(struct kvm_vcpu *vcpu) +static inline void ___sysreg32_restore_state(struct kvm_vcpu *vcpu) { u64 *spsr, *sysreg; diff --git a/arch/arm64/kvm/hyp/timer-sr.c b/arch/arm64/kvm/hyp/timer-sr.c index 46e303281a2c..ab4b2a214309 100644 --- a/arch/arm64/kvm/hyp/timer-sr.c +++ b/arch/arm64/kvm/hyp/timer-sr.c @@ -6,7 +6,7 @@ #include -void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) +void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high) { u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low; write_sysreg(cntvoff, cntvoff_el2); diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c index ab55b0c4a80c..d39fa06fdfe8 100644 --- a/arch/arm64/kvm/hyp/tlb.c +++ b/arch/arm64/kvm/hyp/tlb.c @@ -12,8 +12,7 @@ #include "tlb.h" -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt) { u64 val; @@ -56,8 +55,7 @@ static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, isb(); } -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt) +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt) { /* * We're done with the TLB operation, let's restore the host's diff --git a/arch/arm64/kvm/hyp/tlb.h b/arch/arm64/kvm/hyp/tlb.h index 841ef400c8ec..25dba94d3f51 100644 --- a/arch/arm64/kvm/hyp/tlb.h +++ b/arch/arm64/kvm/hyp/tlb.h @@ -19,13 +19,10 @@ struct tlb_inv_context { u64 sctlr; }; -static void __hyp_text __tlb_switch_to_guest(struct kvm *kvm, - struct tlb_inv_context *cxt); -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, - struct tlb_inv_context *cxt); +static void __tlb_switch_to_guest(struct kvm *kvm, struct tlb_inv_context *cxt); +static void __tlb_switch_to_host(struct kvm *kvm, struct tlb_inv_context *cxt); -static inline void __hyp_text -__tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) +static inline void __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) { struct tlb_inv_context cxt; @@ -79,7 +76,7 @@ __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) +static inline void __tlb_flush_vmid(struct kvm *kvm) { struct tlb_inv_context cxt; @@ -96,7 +93,7 @@ static inline void __hyp_text __tlb_flush_vmid(struct kvm *kvm) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) 
+static inline void __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) { struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); struct tlb_inv_context cxt; @@ -111,7 +108,7 @@ static inline void __hyp_text __tlb_flush_local_vmid(struct kvm_vcpu *vcpu) __tlb_switch_to_host(kvm, &cxt); } -static inline void __hyp_text __tlb_flush_vm_context(void) +static inline void __tlb_flush_vm_context(void) { dsb(ishst); __tlbi(alle1is); diff --git a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c index 4f3a087e36d5..bd1bab551d48 100644 --- a/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c +++ b/arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c @@ -13,7 +13,7 @@ #include #include -static bool __hyp_text __is_be(struct kvm_vcpu *vcpu) +static bool __is_be(struct kvm_vcpu *vcpu) { if (vcpu_mode_is_32bit(vcpu)) return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT); @@ -32,7 +32,7 @@ static bool __hyp_text __is_be(struct kvm_vcpu *vcpu) * 0: Not a GICV access * -1: Illegal GICV access successfully performed */ -int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu) { struct kvm *kvm = kern_hyp_va(vcpu->kvm); struct vgic_dist *vgic = &kvm->arch.vgic; diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c index 49fedf6710f9..d6628573b855 100644 --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c @@ -16,7 +16,7 @@ #define vtr_to_nr_pre_bits(v) ((((u32)(v) >> 26) & 7) + 1) #define vtr_to_nr_apr_regs(v) (1 << (vtr_to_nr_pre_bits(v) - 5)) -static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) +static u64 __gic_v3_get_lr(unsigned int lr) { switch (lr & 0xf) { case 0: @@ -56,7 +56,7 @@ static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) unreachable(); } -static void __hyp_text __gic_v3_set_lr(u64 val, int lr) +static void __gic_v3_set_lr(u64 val, int lr) { switch (lr & 0xf) { case 0: @@ -110,7 +110,7 @@ static void __hyp_text __gic_v3_set_lr(u64 val, int lr) } } -static void __hyp_text __vgic_v3_write_ap0rn(u32 val, int n) +static void __vgic_v3_write_ap0rn(u32 val, int n) { switch (n) { case 0: @@ -128,7 +128,7 @@ static void __hyp_text __vgic_v3_write_ap0rn(u32 val, int n) } } -static void __hyp_text __vgic_v3_write_ap1rn(u32 val, int n) +static void __vgic_v3_write_ap1rn(u32 val, int n) { switch (n) { case 0: @@ -146,7 +146,7 @@ static void __hyp_text __vgic_v3_write_ap1rn(u32 val, int n) } } -static u32 __hyp_text __vgic_v3_read_ap0rn(int n) +static u32 __vgic_v3_read_ap0rn(int n) { u32 val; @@ -170,7 +170,7 @@ static u32 __hyp_text __vgic_v3_read_ap0rn(int n) return val; } -static u32 __hyp_text __vgic_v3_read_ap1rn(int n) +static u32 __vgic_v3_read_ap1rn(int n) { u32 val; @@ -194,7 +194,7 @@ static u32 __hyp_text __vgic_v3_read_ap1rn(int n) return val; } -void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu) +void __vgic_v3_save_state(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs; @@ -230,7 +230,7 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu) +void __vgic_v3_restore_state(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs; @@ -257,7 +257,7 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_activate_traps(struct kvm_vcpu *vcpu) +void __vgic_v3_activate_traps(struct kvm_vcpu 
*vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; @@ -306,7 +306,7 @@ void __hyp_text __vgic_v3_activate_traps(struct kvm_vcpu *vcpu) write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2); } -void __hyp_text __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu) +void __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; u64 val; @@ -333,7 +333,7 @@ void __hyp_text __vgic_v3_deactivate_traps(struct kvm_vcpu *vcpu) write_gicreg(0, ICH_HCR_EL2); } -void __hyp_text __vgic_v3_save_aprs(struct kvm_vcpu *vcpu) +void __vgic_v3_save_aprs(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if; u64 val; @@ -370,7 +370,7 @@ void __hyp_text __vgic_v3_save_aprs(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu) +void __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if; u64 val; @@ -407,7 +407,7 @@ void __hyp_text __vgic_v3_restore_aprs(struct kvm_vcpu *vcpu) } } -void __hyp_text __vgic_v3_init_lrs(void) +void __vgic_v3_init_lrs(void) { int max_lr_idx = vtr_to_max_lr_idx(read_gicreg(ICH_VTR_EL2)); int i; @@ -416,28 +416,28 @@ void __hyp_text __vgic_v3_init_lrs(void) __gic_v3_set_lr(0, i); } -u64 __hyp_text __vgic_v3_get_ich_vtr_el2(void) +u64 __vgic_v3_get_ich_vtr_el2(void) { return read_gicreg(ICH_VTR_EL2); } -u64 __hyp_text __vgic_v3_read_vmcr(void) +u64 __vgic_v3_read_vmcr(void) { return read_gicreg(ICH_VMCR_EL2); } -void __hyp_text __vgic_v3_write_vmcr(u32 vmcr) +void __vgic_v3_write_vmcr(u32 vmcr) { write_gicreg(vmcr, ICH_VMCR_EL2); } -static int __hyp_text __vgic_v3_bpr_min(void) +static int __vgic_v3_bpr_min(void) { /* See Pseudocode for VPriorityGroup */ return 8 - vtr_to_nr_pre_bits(read_gicreg(ICH_VTR_EL2)); } -static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu) +static int __vgic_v3_get_group(struct kvm_vcpu *vcpu) { u32 esr = kvm_vcpu_get_hsr(vcpu); u8 crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT; @@ -447,9 +447,8 @@ static int __hyp_text __vgic_v3_get_group(struct kvm_vcpu *vcpu) #define GICv3_IDLE_PRIORITY 0xff -static int __hyp_text __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, - u32 vmcr, - u64 *lr_val) +static int __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, u32 vmcr, + u64 *lr_val) { unsigned int used_lrs = vcpu->arch.vgic_cpu.used_lrs; u8 priority = GICv3_IDLE_PRIORITY; @@ -487,8 +486,8 @@ static int __hyp_text __vgic_v3_highest_priority_lr(struct kvm_vcpu *vcpu, return lr; } -static int __hyp_text __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, - int intid, u64 *lr_val) +static int __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, int intid, + u64 *lr_val) { unsigned int used_lrs = vcpu->arch.vgic_cpu.used_lrs; int i; @@ -507,7 +506,7 @@ static int __hyp_text __vgic_v3_find_active_lr(struct kvm_vcpu *vcpu, return -1; } -static int __hyp_text __vgic_v3_get_highest_active_priority(void) +static int __vgic_v3_get_highest_active_priority(void) { u8 nr_apr_regs = vtr_to_nr_apr_regs(read_gicreg(ICH_VTR_EL2)); u32 hap = 0; @@ -539,12 +538,12 @@ static int __hyp_text __vgic_v3_get_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static unsigned int __hyp_text __vgic_v3_get_bpr0(u32 vmcr) +static unsigned int __vgic_v3_get_bpr0(u32 vmcr) { return (vmcr & ICH_VMCR_BPR0_MASK) >> ICH_VMCR_BPR0_SHIFT; } -static unsigned int __hyp_text __vgic_v3_get_bpr1(u32 vmcr) +static unsigned int __vgic_v3_get_bpr1(u32 vmcr) { unsigned int bpr; @@ -563,7 +562,7 @@ static unsigned int __hyp_text __vgic_v3_get_bpr1(u32 
vmcr) * Convert a priority to a preemption level, taking the relevant BPR * into account by zeroing the sub-priority bits. */ -static u8 __hyp_text __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) +static u8 __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) { unsigned int bpr; @@ -581,7 +580,7 @@ static u8 __hyp_text __vgic_v3_pri_to_pre(u8 pri, u32 vmcr, int grp) * matter what the guest does with its BPR, we can always set/get the * same value of a priority. */ -static void __hyp_text __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) +static void __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) { u8 pre, ap; u32 val; @@ -600,7 +599,7 @@ static void __hyp_text __vgic_v3_set_active_priority(u8 pri, u32 vmcr, int grp) } } -static int __hyp_text __vgic_v3_clear_highest_active_priority(void) +static int __vgic_v3_clear_highest_active_priority(void) { u8 nr_apr_regs = vtr_to_nr_apr_regs(read_gicreg(ICH_VTR_EL2)); u32 hap = 0; @@ -638,7 +637,7 @@ static int __hyp_text __vgic_v3_clear_highest_active_priority(void) return GICv3_IDLE_PRIORITY; } -static void __hyp_text __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 lr_val; u8 lr_prio, pmr; @@ -674,7 +673,7 @@ static void __hyp_text __vgic_v3_read_iar(struct kvm_vcpu *vcpu, u32 vmcr, int r vcpu_set_reg(vcpu, rt, ICC_IAR1_EL1_SPURIOUS); } -static void __hyp_text __vgic_v3_clear_active_lr(int lr, u64 lr_val) +static void __vgic_v3_clear_active_lr(int lr, u64 lr_val) { lr_val &= ~ICH_LR_ACTIVE_BIT; if (lr_val & ICH_LR_HW) { @@ -687,7 +686,7 @@ static void __hyp_text __vgic_v3_clear_active_lr(int lr, u64 lr_val) __gic_v3_set_lr(lr_val, lr); } -static void __hyp_text __vgic_v3_bump_eoicount(void) +static void __vgic_v3_bump_eoicount(void) { u32 hcr; @@ -696,8 +695,7 @@ static void __hyp_text __vgic_v3_bump_eoicount(void) write_gicreg(hcr, ICH_HCR_EL2); } -static void __hyp_text __vgic_v3_write_dir(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_dir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vid = vcpu_get_reg(vcpu, rt); u64 lr_val; @@ -720,7 +718,7 @@ static void __hyp_text __vgic_v3_write_dir(struct kvm_vcpu *vcpu, __vgic_v3_clear_active_lr(lr, lr_val); } -static void __hyp_text __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vid = vcpu_get_reg(vcpu, rt); u64 lr_val; @@ -757,17 +755,17 @@ static void __hyp_text __vgic_v3_write_eoir(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_clear_active_lr(lr, lr_val); } -static void __hyp_text __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG0_MASK)); } -static void __hyp_text __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, !!(vmcr & ICH_VMCR_ENG1_MASK)); } -static void __hyp_text __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); @@ -779,7 +777,7 @@ static void __hyp_text __vgic_v3_write_igrpen0(struct kvm_vcpu *vcpu, u32 vmcr, __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, 
int rt) { u64 val = vcpu_get_reg(vcpu, rt); @@ -791,17 +789,17 @@ static void __hyp_text __vgic_v3_write_igrpen1(struct kvm_vcpu *vcpu, u32 vmcr, __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr0(vmcr)); } -static void __hyp_text __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_read_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vcpu_set_reg(vcpu, rt, __vgic_v3_get_bpr1(vmcr)); } -static void __hyp_text __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); u8 bpr_min = __vgic_v3_bpr_min() - 1; @@ -818,7 +816,7 @@ static void __hyp_text __vgic_v3_write_bpr0(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) +static void __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 val = vcpu_get_reg(vcpu, rt); u8 bpr_min = __vgic_v3_bpr_min(); @@ -838,7 +836,7 @@ static void __hyp_text __vgic_v3_write_bpr1(struct kvm_vcpu *vcpu, u32 vmcr, int __vgic_v3_write_vmcr(vmcr); } -static void __hyp_text __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { u32 val; @@ -850,7 +848,7 @@ static void __hyp_text __vgic_v3_read_apxrn(struct kvm_vcpu *vcpu, int rt, int n vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) +static void __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int n) { u32 val = vcpu_get_reg(vcpu, rt); @@ -860,56 +858,49 @@ static void __hyp_text __vgic_v3_write_apxrn(struct kvm_vcpu *vcpu, int rt, int __vgic_v3_write_ap1rn(val, n); } -static void __hyp_text __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 0); } -static void __hyp_text __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, +static void __vgic_v3_read_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 1); } -static void __hyp_text __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 2); } -static void __hyp_text __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_read_apxrn(vcpu, rt, 3); } -static void __hyp_text __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr0(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 0); } -static void __hyp_text __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr1(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 1); } -static void __hyp_text __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr2(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 2); } -static void __hyp_text __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_apxr3(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { __vgic_v3_write_apxrn(vcpu, rt, 3); } -static 
void __hyp_text __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u64 lr_val; int lr, lr_grp, grp; @@ -928,16 +919,14 @@ static void __hyp_text __vgic_v3_read_hppir(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, rt, lr_val & ICH_LR_VIRTUAL_ID_MASK); } -static void __hyp_text __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { vmcr &= ICH_VMCR_PMR_MASK; vmcr >>= ICH_VMCR_PMR_SHIFT; vcpu_set_reg(vcpu, rt, vmcr); } -static void __hyp_text __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = vcpu_get_reg(vcpu, rt); @@ -949,15 +938,13 @@ static void __hyp_text __vgic_v3_write_pmr(struct kvm_vcpu *vcpu, write_gicreg(vmcr, ICH_VMCR_EL2); } -static void __hyp_text __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_rpr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = __vgic_v3_get_highest_active_priority(); vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 vtr, val; @@ -978,8 +965,7 @@ static void __hyp_text __vgic_v3_read_ctlr(struct kvm_vcpu *vcpu, vcpu_set_reg(vcpu, rt, val); } -static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, - u32 vmcr, int rt) +static void __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, u32 vmcr, int rt) { u32 val = vcpu_get_reg(vcpu, rt); @@ -996,7 +982,7 @@ static void __hyp_text __vgic_v3_write_ctlr(struct kvm_vcpu *vcpu, write_gicreg(vmcr, ICH_VMCR_EL2); } -int __hyp_text __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) +int __vgic_v3_perform_cpuif_access(struct kvm_vcpu *vcpu) { int rt; u32 esr;
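A note for readers following the series: the __hyp_text annotation being stripped throughout the diff above was, at the time, a per-function combination of a section override and an ftrace opt-out. The following is a rough reconstruction for illustration only, not a verbatim quote; see arch/arm64/include/asm/kvm_hyp.h in the tree this series applies to for the exact definition:

    /*
     * Approximate pre-series definition of __hyp_text (reconstruction,
     * not a verbatim quote): place the function in .hyp.text and do not
     * emit ftrace entry hooks for it.
     */
    #define __hyp_text	__section(.hyp.text) notrace

The nVHE Makefile hunk at the top of this patch reproduces both halves at the object level: objcopy's --rename-section=.text=.hyp.text moves the compiled code into .hyp.text after the fact, and filtering $(CC_FLAGS_FTRACE) out of KBUILD_CFLAGS stops the compiler from emitting the tracing hooks that notrace suppressed per function. VHE code, by contrast, no longer needs either half: it now runs like any other kernel code, which patch 14 below relies on.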
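The objcopy --prefix-symbols=__kvm_nvhe_ step in the same hunk is what lets identically named VHE and nVHE functions coexist in one image. A minimal sketch of the idea, with hypothetical symbol names (the kernel's actual helpers for referencing the prefixed symbols are not shown):

    /* Forward declaration only, so the sketch stands alone. */
    struct kvm_vcpu;

    /*
     * The same source is compiled twice, once per hyp flavour, so both
     * objects initially define this symbol:
     */
    int __kvm_vcpu_run(struct kvm_vcpu *vcpu);

    /*
     * After objcopy --prefix-symbols=__kvm_nvhe_ runs on the nVHE
     * object, the linker sees two distinct names, and both copies can
     * be linked into one vmlinux:
     *
     *   __kvm_vcpu_run              (VHE copy)
     *   __kvm_nvhe___kvm_vcpu_run   (nVHE copy)
     */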
From patchwork Fri May 15 10:58:41 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11551219
From: David Brazdil
To: Catalin Marinas, James Morse, Julien Thierry, Marc Zyngier, Suzuki K Poulose, Will Deacon
Subject: [PATCH v2 14/14] arm64: kvm: Lift instrumentation restrictions on VHE
Date: Fri, 15 May 2020 11:58:41 +0100
Message-Id: <20200515105841.73532-15-dbrazdil@google.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200515105841.73532-1-dbrazdil@google.com>
References: <20200515105841.73532-1-dbrazdil@google.com>
Cc: David Brazdil, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org

With VHE and nVHE executable code now fully separated, remove the build configuration that disabled GCOV/KASAN/UBSAN/KCOV instrumentation of VHE code. VHE code executes under the same memory mappings as the rest of the kernel, so the instrumentation callbacks are safe to run from it. No violations are currently reported by either KASAN or UBSAN.

Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/Makefile | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index c9fd8618980d..69113bf193de 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -11,11 +11,3 @@ obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o
 vhe-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o
-
-# KVM code is run at a different exception code with a different map, so
-# compiler instrumentation that inserts callbacks or checks into the code may
-# cause crashes. Just disable it.
-GCOV_PROFILE	:= n
-KASAN_SANITIZE	:= n
-UBSAN_SANITIZE	:= n
-KCOV_INSTRUMENT	:= n
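As background for why these opt-outs existed at all: sanitizers work by having the compiler insert calls around memory accesses, and those callbacks live in ordinary kernel text and consult ordinary kernel data structures. A simplified, self-contained sketch of the shape of instrumented code (KASAN's real entry points are the __asan_loadN()/__asan_storeN() family; the wrapper below is an illustration, not kernel code):

    /* KASAN runtime entry point, resident in regular kernel .text. */
    extern void __asan_load8(unsigned long addr);

    static inline unsigned long instrumented_load(unsigned long *p)
    {
    	/* Compiler-inserted shadow-memory check before the access. */
    	__asan_load8((unsigned long)p);
    	return *p;
    }

Under nVHE, hyp code runs at EL2 behind its own translation tables, where neither the callback's code nor the KASAN shadow is mapped, so such an inserted call would fault; that is why arch/arm64/kvm/hyp/nvhe/Makefile keeps equivalent restrictions. Under VHE the kernel itself runs at EL2 and the hyp code shares the kernel's mappings, so the callbacks are safe and the flags can go.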