From patchwork Tue Nov 12 10:37:00 2024
X-Patchwork-Submitter: James Clark
X-Patchwork-Id: 13872094
From: James Clark
To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev
Cc: James Clark, Mark Brown, Marc Zyngier, Joey Gouly, Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Alexander Shishkin, Mark Rutland, Anshuman Khandual, Fuad Tabba, James Morse, Shiqi Liu, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 01/12] arm64/sysreg: Add a comment that the sysreg file should be sorted
Date: Tue, 12 Nov 2024 10:37:00 +0000
Message-Id: <20241112103717.589952-2-james.clark@linaro.org>
In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org>
References: <20241112103717.589952-1-james.clark@linaro.org>

From: James Clark

There are a few entries, particularly at the end of the file, that aren't in order. To avoid confusion, add a comment that might help new entries to be added in the right place.

Reviewed-by: Mark Brown
Signed-off-by: James Clark
Signed-off-by: James Clark
---
 arch/arm64/tools/sysreg | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index ed3bf6a0f5c1..a26c0da0c42d 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -48,6 +48,8 @@
 # feature that introduces them (eg, FEAT_LS64_ACCDATA introduces enumeration
 # item ACCDATA) though it may be more taseful to do something else.
 
+# Please try to keep entries in this file sorted by sysreg encoding.
+
 Sysreg	OSDTRRX_EL1	2	0	0	0	2
 Res0	63:32
 Field	31:0	DTRRX
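For context (not part of the patch itself): "sorted by sysreg encoding" effectively means ordering entries by their (Op0, Op1, CRn, CRm, Op2) operands, most significant first, since that is the order in which the fields are packed into the encoding. A minimal stand-alone sketch of such a comparison; the OSDTRRX_EL1 operands come from the hunk above, the second operand set is purely illustrative:

#include <stdio.h>

/* Illustrative only: compare two sysreg entries by their encoding operands,
 * most significant first. Lexicographic order of the tuple matches numeric
 * order of the packed encoding because the fields are packed in this order. */
struct enc { unsigned op0, op1, crn, crm, op2; };

static int enc_cmp(const struct enc *a, const struct enc *b)
{
	unsigned ka[] = { a->op0, a->op1, a->crn, a->crm, a->op2 };
	unsigned kb[] = { b->op0, b->op1, b->crn, b->crm, b->op2 };

	for (int i = 0; i < 5; i++) {
		if (ka[i] != kb[i])
			return ka[i] < kb[i] ? -1 : 1;
	}
	return 0;
}

int main(void)
{
	struct enc osdtrrx_el1 = { 2, 0, 0, 0, 2 };	/* from the hunk above */
	struct enc later_entry = { 2, 0, 1, 0, 4 };	/* illustrative values only */

	printf("%d\n", enc_cmp(&osdtrrx_el1, &later_entry));	/* -1: already in order */
	return 0;
}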
From patchwork Tue Nov 12 10:37:01 2024
X-Patchwork-Submitter: James Clark
X-Patchwork-Id: 13872100
From: James Clark
To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev
Cc: James Clark, Mark Brown, Marc Zyngier, Joey Gouly, Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Alexander Shishkin, Mark Rutland, Anshuman Khandual, Shiqi Liu, Fuad Tabba, James Morse, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 02/12] tools: arm64: Update sysreg.h header files
Date: Tue, 12 Nov 2024 10:37:01 +0000
Message-Id: <20241112103717.589952-3-james.clark@linaro.org>
In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org>
References: <20241112103717.589952-1-james.clark@linaro.org>

From: James Clark

Created with the following:

  cp include/linux/kasan-tags.h tools/include/linux/
  cp arch/arm64/include/asm/sysreg.h tools/arch/arm64/include/asm/

Update the tools copy of sysreg.h so that the next commit to add a new register doesn't have unrelated changes in it. Because the new version of sysreg.h includes kasan-tags.h, that file also now needs to be copied into tools.
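For readers wondering why sysreg.h now has to pull in kasan-tags.h: later in this patch the KERNEL_GCR_EL1_EXCL mask is derived from KASAN_TAG_MIN/KASAN_TAG_MAX. A small user-space sketch of that arithmetic, assuming CONFIG_KASAN_HW_TAGS and with GENMASK() reimplemented locally so it compiles on its own:

#include <stdio.h>

/* Local stand-in for the kernel helper used by the real header */
#define GENMASK(h, l)	(((1UL << ((h) - (l) + 1)) - 1) << (l))

#define KASAN_TAG_MIN		0xF0	/* from kasan-tags.h when CONFIG_KASAN_HW_TAGS */
#define KASAN_TAG_MAX		0xFD

#define SYS_GCR_EL1_EXCL_MASK	0xffffUL

/* KASAN tags 0xF0-0xFF map onto MTE tags 0x0-0xF, per the comment in the diff */
#define __MTE_TAG_MIN		(KASAN_TAG_MIN & 0xf)	/* 0x0 */
#define __MTE_TAG_MAX		(KASAN_TAG_MAX & 0xf)	/* 0xd */
#define __MTE_TAG_INCL		GENMASK(__MTE_TAG_MAX, __MTE_TAG_MIN)
#define KERNEL_GCR_EL1_EXCL	(SYS_GCR_EL1_EXCL_MASK & ~__MTE_TAG_INCL)

int main(void)
{
	/* incl = 0x3fff, excl = 0xc000: MTE tags 0xe/0xf stay excluded, which
	 * correspond to KASAN_TAG_INVALID (0xFE) and KASAN_TAG_KERNEL (0xFF) */
	printf("incl = %#lx, excl = %#lx\n",
	       (unsigned long)__MTE_TAG_INCL, (unsigned long)KERNEL_GCR_EL1_EXCL);
	return 0;
}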
Acked-by: Mark Brown Reviewed-by: Suzuki K Poulose Signed-off-by: James Clark Signed-off-by: James Clark --- tools/arch/arm64/include/asm/sysreg.h | 398 +++++++++++++++++++++++++- tools/include/linux/kasan-tags.h | 15 + 2 files changed, 405 insertions(+), 8 deletions(-) create mode 100644 tools/include/linux/kasan-tags.h diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h index cd8420e8c3ad..345e81e0d2b3 100644 --- a/tools/arch/arm64/include/asm/sysreg.h +++ b/tools/arch/arm64/include/asm/sysreg.h @@ -11,6 +11,7 @@ #include #include +#include #include @@ -108,6 +109,9 @@ #define set_pstate_ssbs(x) asm volatile(SET_PSTATE_SSBS(x)) #define set_pstate_dit(x) asm volatile(SET_PSTATE_DIT(x)) +/* Register-based PAN access, for save/restore purposes */ +#define SYS_PSTATE_PAN sys_reg(3, 0, 4, 2, 3) + #define __SYS_BARRIER_INSN(CRm, op2, Rt) \ __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f)) @@ -123,6 +127,37 @@ #define SYS_DC_CIGSW sys_insn(1, 0, 7, 14, 4) #define SYS_DC_CIGDSW sys_insn(1, 0, 7, 14, 6) +#define SYS_IC_IALLUIS sys_insn(1, 0, 7, 1, 0) +#define SYS_IC_IALLU sys_insn(1, 0, 7, 5, 0) +#define SYS_IC_IVAU sys_insn(1, 3, 7, 5, 1) + +#define SYS_DC_IVAC sys_insn(1, 0, 7, 6, 1) +#define SYS_DC_IGVAC sys_insn(1, 0, 7, 6, 3) +#define SYS_DC_IGDVAC sys_insn(1, 0, 7, 6, 5) + +#define SYS_DC_CVAC sys_insn(1, 3, 7, 10, 1) +#define SYS_DC_CGVAC sys_insn(1, 3, 7, 10, 3) +#define SYS_DC_CGDVAC sys_insn(1, 3, 7, 10, 5) + +#define SYS_DC_CVAU sys_insn(1, 3, 7, 11, 1) + +#define SYS_DC_CVAP sys_insn(1, 3, 7, 12, 1) +#define SYS_DC_CGVAP sys_insn(1, 3, 7, 12, 3) +#define SYS_DC_CGDVAP sys_insn(1, 3, 7, 12, 5) + +#define SYS_DC_CVADP sys_insn(1, 3, 7, 13, 1) +#define SYS_DC_CGVADP sys_insn(1, 3, 7, 13, 3) +#define SYS_DC_CGDVADP sys_insn(1, 3, 7, 13, 5) + +#define SYS_DC_CIVAC sys_insn(1, 3, 7, 14, 1) +#define SYS_DC_CIGVAC sys_insn(1, 3, 7, 14, 3) +#define SYS_DC_CIGDVAC sys_insn(1, 3, 7, 14, 5) + +/* Data cache zero operations */ +#define SYS_DC_ZVA sys_insn(1, 3, 7, 4, 1) +#define SYS_DC_GVA sys_insn(1, 3, 7, 4, 3) +#define SYS_DC_GZVA sys_insn(1, 3, 7, 4, 4) + /* * Automatically generated definitions for system registers, the * manual encodings below are in the process of being converted to @@ -162,6 +197,84 @@ #define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0) #define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0) +#define SYS_BRBINF_EL1(n) sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 0)) +#define SYS_BRBINFINJ_EL1 sys_reg(2, 1, 9, 1, 0) +#define SYS_BRBSRC_EL1(n) sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 1)) +#define SYS_BRBSRCINJ_EL1 sys_reg(2, 1, 9, 1, 1) +#define SYS_BRBTGT_EL1(n) sys_reg(2, 1, 8, (n & 15), (((n & 16) >> 2) | 2)) +#define SYS_BRBTGTINJ_EL1 sys_reg(2, 1, 9, 1, 2) +#define SYS_BRBTS_EL1 sys_reg(2, 1, 9, 0, 2) + +#define SYS_BRBCR_EL1 sys_reg(2, 1, 9, 0, 0) +#define SYS_BRBFCR_EL1 sys_reg(2, 1, 9, 0, 1) +#define SYS_BRBIDR0_EL1 sys_reg(2, 1, 9, 2, 0) + +#define SYS_TRCITECR_EL1 sys_reg(3, 0, 1, 2, 3) +#define SYS_TRCACATR(m) sys_reg(2, 1, 2, ((m & 7) << 1), (2 | (m >> 3))) +#define SYS_TRCACVR(m) sys_reg(2, 1, 2, ((m & 7) << 1), (0 | (m >> 3))) +#define SYS_TRCAUTHSTATUS sys_reg(2, 1, 7, 14, 6) +#define SYS_TRCAUXCTLR sys_reg(2, 1, 0, 6, 0) +#define SYS_TRCBBCTLR sys_reg(2, 1, 0, 15, 0) +#define SYS_TRCCCCTLR sys_reg(2, 1, 0, 14, 0) +#define SYS_TRCCIDCCTLR0 sys_reg(2, 1, 3, 0, 2) +#define SYS_TRCCIDCCTLR1 sys_reg(2, 1, 3, 1, 2) +#define SYS_TRCCIDCVR(m) sys_reg(2, 1, 3, ((m & 7) << 1), 0) +#define SYS_TRCCLAIMCLR 
sys_reg(2, 1, 7, 9, 6) +#define SYS_TRCCLAIMSET sys_reg(2, 1, 7, 8, 6) +#define SYS_TRCCNTCTLR(m) sys_reg(2, 1, 0, (4 | (m & 3)), 5) +#define SYS_TRCCNTRLDVR(m) sys_reg(2, 1, 0, (0 | (m & 3)), 5) +#define SYS_TRCCNTVR(m) sys_reg(2, 1, 0, (8 | (m & 3)), 5) +#define SYS_TRCCONFIGR sys_reg(2, 1, 0, 4, 0) +#define SYS_TRCDEVARCH sys_reg(2, 1, 7, 15, 6) +#define SYS_TRCDEVID sys_reg(2, 1, 7, 2, 7) +#define SYS_TRCEVENTCTL0R sys_reg(2, 1, 0, 8, 0) +#define SYS_TRCEVENTCTL1R sys_reg(2, 1, 0, 9, 0) +#define SYS_TRCEXTINSELR(m) sys_reg(2, 1, 0, (8 | (m & 3)), 4) +#define SYS_TRCIDR0 sys_reg(2, 1, 0, 8, 7) +#define SYS_TRCIDR10 sys_reg(2, 1, 0, 2, 6) +#define SYS_TRCIDR11 sys_reg(2, 1, 0, 3, 6) +#define SYS_TRCIDR12 sys_reg(2, 1, 0, 4, 6) +#define SYS_TRCIDR13 sys_reg(2, 1, 0, 5, 6) +#define SYS_TRCIDR1 sys_reg(2, 1, 0, 9, 7) +#define SYS_TRCIDR2 sys_reg(2, 1, 0, 10, 7) +#define SYS_TRCIDR3 sys_reg(2, 1, 0, 11, 7) +#define SYS_TRCIDR4 sys_reg(2, 1, 0, 12, 7) +#define SYS_TRCIDR5 sys_reg(2, 1, 0, 13, 7) +#define SYS_TRCIDR6 sys_reg(2, 1, 0, 14, 7) +#define SYS_TRCIDR7 sys_reg(2, 1, 0, 15, 7) +#define SYS_TRCIDR8 sys_reg(2, 1, 0, 0, 6) +#define SYS_TRCIDR9 sys_reg(2, 1, 0, 1, 6) +#define SYS_TRCIMSPEC(m) sys_reg(2, 1, 0, (m & 7), 7) +#define SYS_TRCITEEDCR sys_reg(2, 1, 0, 2, 1) +#define SYS_TRCOSLSR sys_reg(2, 1, 1, 1, 4) +#define SYS_TRCPRGCTLR sys_reg(2, 1, 0, 1, 0) +#define SYS_TRCQCTLR sys_reg(2, 1, 0, 1, 1) +#define SYS_TRCRSCTLR(m) sys_reg(2, 1, 1, (m & 15), (0 | (m >> 4))) +#define SYS_TRCRSR sys_reg(2, 1, 0, 10, 0) +#define SYS_TRCSEQEVR(m) sys_reg(2, 1, 0, (m & 3), 4) +#define SYS_TRCSEQRSTEVR sys_reg(2, 1, 0, 6, 4) +#define SYS_TRCSEQSTR sys_reg(2, 1, 0, 7, 4) +#define SYS_TRCSSCCR(m) sys_reg(2, 1, 1, (m & 7), 2) +#define SYS_TRCSSCSR(m) sys_reg(2, 1, 1, (8 | (m & 7)), 2) +#define SYS_TRCSSPCICR(m) sys_reg(2, 1, 1, (m & 7), 3) +#define SYS_TRCSTALLCTLR sys_reg(2, 1, 0, 11, 0) +#define SYS_TRCSTATR sys_reg(2, 1, 0, 3, 0) +#define SYS_TRCSYNCPR sys_reg(2, 1, 0, 13, 0) +#define SYS_TRCTRACEIDR sys_reg(2, 1, 0, 0, 1) +#define SYS_TRCTSCTLR sys_reg(2, 1, 0, 12, 0) +#define SYS_TRCVICTLR sys_reg(2, 1, 0, 0, 2) +#define SYS_TRCVIIECTLR sys_reg(2, 1, 0, 1, 2) +#define SYS_TRCVIPCSSCTLR sys_reg(2, 1, 0, 3, 2) +#define SYS_TRCVISSCTLR sys_reg(2, 1, 0, 2, 2) +#define SYS_TRCVMIDCCTLR0 sys_reg(2, 1, 3, 2, 2) +#define SYS_TRCVMIDCCTLR1 sys_reg(2, 1, 3, 3, 2) +#define SYS_TRCVMIDCVR(m) sys_reg(2, 1, 3, ((m & 7) << 1), 1) + +/* ETM */ +#define SYS_TRCOSLAR sys_reg(2, 1, 1, 0, 4) + +#define SYS_BRBCR_EL2 sys_reg(2, 4, 9, 0, 0) + #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0) #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) @@ -202,15 +315,38 @@ #define SYS_ERXCTLR_EL1 sys_reg(3, 0, 5, 4, 1) #define SYS_ERXSTATUS_EL1 sys_reg(3, 0, 5, 4, 2) #define SYS_ERXADDR_EL1 sys_reg(3, 0, 5, 4, 3) +#define SYS_ERXPFGF_EL1 sys_reg(3, 0, 5, 4, 4) +#define SYS_ERXPFGCTL_EL1 sys_reg(3, 0, 5, 4, 5) +#define SYS_ERXPFGCDN_EL1 sys_reg(3, 0, 5, 4, 6) #define SYS_ERXMISC0_EL1 sys_reg(3, 0, 5, 5, 0) #define SYS_ERXMISC1_EL1 sys_reg(3, 0, 5, 5, 1) +#define SYS_ERXMISC2_EL1 sys_reg(3, 0, 5, 5, 2) +#define SYS_ERXMISC3_EL1 sys_reg(3, 0, 5, 5, 3) #define SYS_TFSR_EL1 sys_reg(3, 0, 5, 6, 0) #define SYS_TFSRE0_EL1 sys_reg(3, 0, 5, 6, 1) #define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0) #define SYS_PAR_EL1_F BIT(0) +/* When PAR_EL1.F == 1 */ #define SYS_PAR_EL1_FST GENMASK(6, 1) +#define SYS_PAR_EL1_PTW BIT(8) +#define SYS_PAR_EL1_S BIT(9) +#define SYS_PAR_EL1_AssuredOnly BIT(12) +#define 
SYS_PAR_EL1_TopLevel BIT(13) +#define SYS_PAR_EL1_Overlay BIT(14) +#define SYS_PAR_EL1_DirtyBit BIT(15) +#define SYS_PAR_EL1_F1_IMPDEF GENMASK_ULL(63, 48) +#define SYS_PAR_EL1_F1_RES0 (BIT(7) | BIT(10) | GENMASK_ULL(47, 16)) +#define SYS_PAR_EL1_RES1 BIT(11) +/* When PAR_EL1.F == 0 */ +#define SYS_PAR_EL1_SH GENMASK_ULL(8, 7) +#define SYS_PAR_EL1_NS BIT(9) +#define SYS_PAR_EL1_F0_IMPDEF BIT(10) +#define SYS_PAR_EL1_NSE BIT(11) +#define SYS_PAR_EL1_PA GENMASK_ULL(51, 12) +#define SYS_PAR_EL1_ATTR GENMASK_ULL(63, 56) +#define SYS_PAR_EL1_F0_RES0 (GENMASK_ULL(6, 1) | GENMASK_ULL(55, 52)) /*** Statistical Profiling Extension ***/ #define PMSEVFR_EL1_RES0_IMP \ @@ -274,6 +410,8 @@ #define SYS_ICC_IGRPEN0_EL1 sys_reg(3, 0, 12, 12, 6) #define SYS_ICC_IGRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) +#define SYS_ACCDATA_EL1 sys_reg(3, 0, 13, 0, 5) + #define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0) #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) @@ -286,7 +424,6 @@ #define SYS_PMCNTENCLR_EL0 sys_reg(3, 3, 9, 12, 2) #define SYS_PMOVSCLR_EL0 sys_reg(3, 3, 9, 12, 3) #define SYS_PMSWINC_EL0 sys_reg(3, 3, 9, 12, 4) -#define SYS_PMSELR_EL0 sys_reg(3, 3, 9, 12, 5) #define SYS_PMCEID0_EL0 sys_reg(3, 3, 9, 12, 6) #define SYS_PMCEID1_EL0 sys_reg(3, 3, 9, 12, 7) #define SYS_PMCCNTR_EL0 sys_reg(3, 3, 9, 13, 0) @@ -369,6 +506,7 @@ #define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0) #define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1) +#define SYS_SCTLR2_EL2 sys_reg(3, 4, 1, 0, 3) #define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0) #define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1) #define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2) @@ -382,12 +520,15 @@ #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) -#define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4) -#define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5) +#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0) #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1) #define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0) +#define SYS_SPSR_irq sys_reg(3, 4, 4, 3, 0) +#define SYS_SPSR_abt sys_reg(3, 4, 4, 3, 1) +#define SYS_SPSR_und sys_reg(3, 4, 4, 3, 2) +#define SYS_SPSR_fiq sys_reg(3, 4, 4, 3, 3) #define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1) #define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0) #define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1) @@ -449,24 +590,49 @@ #define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1) #define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2) +#define SYS_SCXTNUM_EL2 sys_reg(3, 4, 13, 0, 7) + +#define __AMEV_op2(m) (m & 0x7) +#define __AMEV_CRm(n, m) (n | ((m & 0x8) >> 3)) +#define __SYS__AMEVCNTVOFF0n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0x8, m), __AMEV_op2(m)) +#define SYS_AMEVCNTVOFF0n_EL2(m) __SYS__AMEVCNTVOFF0n_EL2(m) +#define __SYS__AMEVCNTVOFF1n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0xA, m), __AMEV_op2(m)) +#define SYS_AMEVCNTVOFF1n_EL2(m) __SYS__AMEVCNTVOFF1n_EL2(m) #define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3) #define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0) +#define SYS_CNTHP_TVAL_EL2 sys_reg(3, 4, 14, 2, 0) +#define SYS_CNTHP_CTL_EL2 sys_reg(3, 4, 14, 2, 1) +#define SYS_CNTHP_CVAL_EL2 sys_reg(3, 4, 14, 2, 2) +#define SYS_CNTHV_TVAL_EL2 sys_reg(3, 4, 14, 3, 0) +#define SYS_CNTHV_CTL_EL2 sys_reg(3, 4, 14, 3, 1) +#define SYS_CNTHV_CVAL_EL2 sys_reg(3, 4, 14, 3, 2) /* VHE encodings for architectural EL0/1 system registers */ +#define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0) #define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0) +#define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2) +#define SYS_SCTLR2_EL12 sys_reg(3, 5, 1, 0, 3) +#define 
SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) +#define SYS_TRFCR_EL12 sys_reg(3, 5, 1, 2, 1) +#define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6) #define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0) #define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1) #define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2) +#define SYS_TCR2_EL12 sys_reg(3, 5, 2, 0, 3) #define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0) #define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1) #define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0) #define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1) #define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0) #define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0) +#define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0) +#define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0) #define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0) #define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0) #define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0) +#define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1) +#define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7) #define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0) #define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0) #define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1) @@ -477,6 +643,183 @@ #define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0) +/* AT instructions */ +#define AT_Op0 1 +#define AT_CRn 7 + +#define OP_AT_S1E1R sys_insn(AT_Op0, 0, AT_CRn, 8, 0) +#define OP_AT_S1E1W sys_insn(AT_Op0, 0, AT_CRn, 8, 1) +#define OP_AT_S1E0R sys_insn(AT_Op0, 0, AT_CRn, 8, 2) +#define OP_AT_S1E0W sys_insn(AT_Op0, 0, AT_CRn, 8, 3) +#define OP_AT_S1E1RP sys_insn(AT_Op0, 0, AT_CRn, 9, 0) +#define OP_AT_S1E1WP sys_insn(AT_Op0, 0, AT_CRn, 9, 1) +#define OP_AT_S1E1A sys_insn(AT_Op0, 0, AT_CRn, 9, 2) +#define OP_AT_S1E2R sys_insn(AT_Op0, 4, AT_CRn, 8, 0) +#define OP_AT_S1E2W sys_insn(AT_Op0, 4, AT_CRn, 8, 1) +#define OP_AT_S12E1R sys_insn(AT_Op0, 4, AT_CRn, 8, 4) +#define OP_AT_S12E1W sys_insn(AT_Op0, 4, AT_CRn, 8, 5) +#define OP_AT_S12E0R sys_insn(AT_Op0, 4, AT_CRn, 8, 6) +#define OP_AT_S12E0W sys_insn(AT_Op0, 4, AT_CRn, 8, 7) +#define OP_AT_S1E2A sys_insn(AT_Op0, 4, AT_CRn, 9, 2) + +/* TLBI instructions */ +#define TLBI_Op0 1 + +#define TLBI_Op1_EL1 0 /* Accessible from EL1 or higher */ +#define TLBI_Op1_EL2 4 /* Accessible from EL2 or higher */ + +#define TLBI_CRn_XS 8 /* Extra Slow (the common one) */ +#define TLBI_CRn_nXS 9 /* not Extra Slow (which nobody uses)*/ + +#define TLBI_CRm_IPAIS 0 /* S2 Inner-Shareable */ +#define TLBI_CRm_nROS 1 /* non-Range, Outer-Sharable */ +#define TLBI_CRm_RIS 2 /* Range, Inner-Sharable */ +#define TLBI_CRm_nRIS 3 /* non-Range, Inner-Sharable */ +#define TLBI_CRm_IPAONS 4 /* S2 Outer and Non-Shareable */ +#define TLBI_CRm_ROS 5 /* Range, Outer-Sharable */ +#define TLBI_CRm_RNS 6 /* Range, Non-Sharable */ +#define TLBI_CRm_nRNS 7 /* non-Range, Non-Sharable */ + +#define OP_TLBI_VMALLE1OS sys_insn(1, 0, 8, 1, 0) +#define OP_TLBI_VAE1OS sys_insn(1, 0, 8, 1, 1) +#define OP_TLBI_ASIDE1OS sys_insn(1, 0, 8, 1, 2) +#define OP_TLBI_VAAE1OS sys_insn(1, 0, 8, 1, 3) +#define OP_TLBI_VALE1OS sys_insn(1, 0, 8, 1, 5) +#define OP_TLBI_VAALE1OS sys_insn(1, 0, 8, 1, 7) +#define OP_TLBI_RVAE1IS sys_insn(1, 0, 8, 2, 1) +#define OP_TLBI_RVAAE1IS sys_insn(1, 0, 8, 2, 3) +#define OP_TLBI_RVALE1IS sys_insn(1, 0, 8, 2, 5) +#define OP_TLBI_RVAALE1IS sys_insn(1, 0, 8, 2, 7) +#define OP_TLBI_VMALLE1IS sys_insn(1, 0, 8, 3, 0) +#define OP_TLBI_VAE1IS sys_insn(1, 0, 8, 3, 1) +#define OP_TLBI_ASIDE1IS sys_insn(1, 0, 8, 3, 2) +#define OP_TLBI_VAAE1IS sys_insn(1, 0, 8, 3, 3) +#define OP_TLBI_VALE1IS sys_insn(1, 0, 8, 3, 5) +#define OP_TLBI_VAALE1IS sys_insn(1, 0, 8, 3, 7) +#define OP_TLBI_RVAE1OS sys_insn(1, 0, 8, 5, 1) +#define 
OP_TLBI_RVAAE1OS sys_insn(1, 0, 8, 5, 3) +#define OP_TLBI_RVALE1OS sys_insn(1, 0, 8, 5, 5) +#define OP_TLBI_RVAALE1OS sys_insn(1, 0, 8, 5, 7) +#define OP_TLBI_RVAE1 sys_insn(1, 0, 8, 6, 1) +#define OP_TLBI_RVAAE1 sys_insn(1, 0, 8, 6, 3) +#define OP_TLBI_RVALE1 sys_insn(1, 0, 8, 6, 5) +#define OP_TLBI_RVAALE1 sys_insn(1, 0, 8, 6, 7) +#define OP_TLBI_VMALLE1 sys_insn(1, 0, 8, 7, 0) +#define OP_TLBI_VAE1 sys_insn(1, 0, 8, 7, 1) +#define OP_TLBI_ASIDE1 sys_insn(1, 0, 8, 7, 2) +#define OP_TLBI_VAAE1 sys_insn(1, 0, 8, 7, 3) +#define OP_TLBI_VALE1 sys_insn(1, 0, 8, 7, 5) +#define OP_TLBI_VAALE1 sys_insn(1, 0, 8, 7, 7) +#define OP_TLBI_VMALLE1OSNXS sys_insn(1, 0, 9, 1, 0) +#define OP_TLBI_VAE1OSNXS sys_insn(1, 0, 9, 1, 1) +#define OP_TLBI_ASIDE1OSNXS sys_insn(1, 0, 9, 1, 2) +#define OP_TLBI_VAAE1OSNXS sys_insn(1, 0, 9, 1, 3) +#define OP_TLBI_VALE1OSNXS sys_insn(1, 0, 9, 1, 5) +#define OP_TLBI_VAALE1OSNXS sys_insn(1, 0, 9, 1, 7) +#define OP_TLBI_RVAE1ISNXS sys_insn(1, 0, 9, 2, 1) +#define OP_TLBI_RVAAE1ISNXS sys_insn(1, 0, 9, 2, 3) +#define OP_TLBI_RVALE1ISNXS sys_insn(1, 0, 9, 2, 5) +#define OP_TLBI_RVAALE1ISNXS sys_insn(1, 0, 9, 2, 7) +#define OP_TLBI_VMALLE1ISNXS sys_insn(1, 0, 9, 3, 0) +#define OP_TLBI_VAE1ISNXS sys_insn(1, 0, 9, 3, 1) +#define OP_TLBI_ASIDE1ISNXS sys_insn(1, 0, 9, 3, 2) +#define OP_TLBI_VAAE1ISNXS sys_insn(1, 0, 9, 3, 3) +#define OP_TLBI_VALE1ISNXS sys_insn(1, 0, 9, 3, 5) +#define OP_TLBI_VAALE1ISNXS sys_insn(1, 0, 9, 3, 7) +#define OP_TLBI_RVAE1OSNXS sys_insn(1, 0, 9, 5, 1) +#define OP_TLBI_RVAAE1OSNXS sys_insn(1, 0, 9, 5, 3) +#define OP_TLBI_RVALE1OSNXS sys_insn(1, 0, 9, 5, 5) +#define OP_TLBI_RVAALE1OSNXS sys_insn(1, 0, 9, 5, 7) +#define OP_TLBI_RVAE1NXS sys_insn(1, 0, 9, 6, 1) +#define OP_TLBI_RVAAE1NXS sys_insn(1, 0, 9, 6, 3) +#define OP_TLBI_RVALE1NXS sys_insn(1, 0, 9, 6, 5) +#define OP_TLBI_RVAALE1NXS sys_insn(1, 0, 9, 6, 7) +#define OP_TLBI_VMALLE1NXS sys_insn(1, 0, 9, 7, 0) +#define OP_TLBI_VAE1NXS sys_insn(1, 0, 9, 7, 1) +#define OP_TLBI_ASIDE1NXS sys_insn(1, 0, 9, 7, 2) +#define OP_TLBI_VAAE1NXS sys_insn(1, 0, 9, 7, 3) +#define OP_TLBI_VALE1NXS sys_insn(1, 0, 9, 7, 5) +#define OP_TLBI_VAALE1NXS sys_insn(1, 0, 9, 7, 7) +#define OP_TLBI_IPAS2E1IS sys_insn(1, 4, 8, 0, 1) +#define OP_TLBI_RIPAS2E1IS sys_insn(1, 4, 8, 0, 2) +#define OP_TLBI_IPAS2LE1IS sys_insn(1, 4, 8, 0, 5) +#define OP_TLBI_RIPAS2LE1IS sys_insn(1, 4, 8, 0, 6) +#define OP_TLBI_ALLE2OS sys_insn(1, 4, 8, 1, 0) +#define OP_TLBI_VAE2OS sys_insn(1, 4, 8, 1, 1) +#define OP_TLBI_ALLE1OS sys_insn(1, 4, 8, 1, 4) +#define OP_TLBI_VALE2OS sys_insn(1, 4, 8, 1, 5) +#define OP_TLBI_VMALLS12E1OS sys_insn(1, 4, 8, 1, 6) +#define OP_TLBI_RVAE2IS sys_insn(1, 4, 8, 2, 1) +#define OP_TLBI_RVALE2IS sys_insn(1, 4, 8, 2, 5) +#define OP_TLBI_ALLE2IS sys_insn(1, 4, 8, 3, 0) +#define OP_TLBI_VAE2IS sys_insn(1, 4, 8, 3, 1) +#define OP_TLBI_ALLE1IS sys_insn(1, 4, 8, 3, 4) +#define OP_TLBI_VALE2IS sys_insn(1, 4, 8, 3, 5) +#define OP_TLBI_VMALLS12E1IS sys_insn(1, 4, 8, 3, 6) +#define OP_TLBI_IPAS2E1OS sys_insn(1, 4, 8, 4, 0) +#define OP_TLBI_IPAS2E1 sys_insn(1, 4, 8, 4, 1) +#define OP_TLBI_RIPAS2E1 sys_insn(1, 4, 8, 4, 2) +#define OP_TLBI_RIPAS2E1OS sys_insn(1, 4, 8, 4, 3) +#define OP_TLBI_IPAS2LE1OS sys_insn(1, 4, 8, 4, 4) +#define OP_TLBI_IPAS2LE1 sys_insn(1, 4, 8, 4, 5) +#define OP_TLBI_RIPAS2LE1 sys_insn(1, 4, 8, 4, 6) +#define OP_TLBI_RIPAS2LE1OS sys_insn(1, 4, 8, 4, 7) +#define OP_TLBI_RVAE2OS sys_insn(1, 4, 8, 5, 1) +#define OP_TLBI_RVALE2OS sys_insn(1, 4, 8, 5, 5) +#define OP_TLBI_RVAE2 sys_insn(1, 4, 8, 6, 1) +#define 
OP_TLBI_RVALE2 sys_insn(1, 4, 8, 6, 5) +#define OP_TLBI_ALLE2 sys_insn(1, 4, 8, 7, 0) +#define OP_TLBI_VAE2 sys_insn(1, 4, 8, 7, 1) +#define OP_TLBI_ALLE1 sys_insn(1, 4, 8, 7, 4) +#define OP_TLBI_VALE2 sys_insn(1, 4, 8, 7, 5) +#define OP_TLBI_VMALLS12E1 sys_insn(1, 4, 8, 7, 6) +#define OP_TLBI_IPAS2E1ISNXS sys_insn(1, 4, 9, 0, 1) +#define OP_TLBI_RIPAS2E1ISNXS sys_insn(1, 4, 9, 0, 2) +#define OP_TLBI_IPAS2LE1ISNXS sys_insn(1, 4, 9, 0, 5) +#define OP_TLBI_RIPAS2LE1ISNXS sys_insn(1, 4, 9, 0, 6) +#define OP_TLBI_ALLE2OSNXS sys_insn(1, 4, 9, 1, 0) +#define OP_TLBI_VAE2OSNXS sys_insn(1, 4, 9, 1, 1) +#define OP_TLBI_ALLE1OSNXS sys_insn(1, 4, 9, 1, 4) +#define OP_TLBI_VALE2OSNXS sys_insn(1, 4, 9, 1, 5) +#define OP_TLBI_VMALLS12E1OSNXS sys_insn(1, 4, 9, 1, 6) +#define OP_TLBI_RVAE2ISNXS sys_insn(1, 4, 9, 2, 1) +#define OP_TLBI_RVALE2ISNXS sys_insn(1, 4, 9, 2, 5) +#define OP_TLBI_ALLE2ISNXS sys_insn(1, 4, 9, 3, 0) +#define OP_TLBI_VAE2ISNXS sys_insn(1, 4, 9, 3, 1) +#define OP_TLBI_ALLE1ISNXS sys_insn(1, 4, 9, 3, 4) +#define OP_TLBI_VALE2ISNXS sys_insn(1, 4, 9, 3, 5) +#define OP_TLBI_VMALLS12E1ISNXS sys_insn(1, 4, 9, 3, 6) +#define OP_TLBI_IPAS2E1OSNXS sys_insn(1, 4, 9, 4, 0) +#define OP_TLBI_IPAS2E1NXS sys_insn(1, 4, 9, 4, 1) +#define OP_TLBI_RIPAS2E1NXS sys_insn(1, 4, 9, 4, 2) +#define OP_TLBI_RIPAS2E1OSNXS sys_insn(1, 4, 9, 4, 3) +#define OP_TLBI_IPAS2LE1OSNXS sys_insn(1, 4, 9, 4, 4) +#define OP_TLBI_IPAS2LE1NXS sys_insn(1, 4, 9, 4, 5) +#define OP_TLBI_RIPAS2LE1NXS sys_insn(1, 4, 9, 4, 6) +#define OP_TLBI_RIPAS2LE1OSNXS sys_insn(1, 4, 9, 4, 7) +#define OP_TLBI_RVAE2OSNXS sys_insn(1, 4, 9, 5, 1) +#define OP_TLBI_RVALE2OSNXS sys_insn(1, 4, 9, 5, 5) +#define OP_TLBI_RVAE2NXS sys_insn(1, 4, 9, 6, 1) +#define OP_TLBI_RVALE2NXS sys_insn(1, 4, 9, 6, 5) +#define OP_TLBI_ALLE2NXS sys_insn(1, 4, 9, 7, 0) +#define OP_TLBI_VAE2NXS sys_insn(1, 4, 9, 7, 1) +#define OP_TLBI_ALLE1NXS sys_insn(1, 4, 9, 7, 4) +#define OP_TLBI_VALE2NXS sys_insn(1, 4, 9, 7, 5) +#define OP_TLBI_VMALLS12E1NXS sys_insn(1, 4, 9, 7, 6) + +/* Misc instructions */ +#define OP_GCSPUSHX sys_insn(1, 0, 7, 7, 4) +#define OP_GCSPOPCX sys_insn(1, 0, 7, 7, 5) +#define OP_GCSPOPX sys_insn(1, 0, 7, 7, 6) +#define OP_GCSPUSHM sys_insn(1, 3, 7, 7, 0) + +#define OP_BRB_IALL sys_insn(1, 1, 7, 2, 4) +#define OP_BRB_INJ sys_insn(1, 1, 7, 2, 5) +#define OP_CFP_RCTX sys_insn(1, 3, 7, 3, 4) +#define OP_DVP_RCTX sys_insn(1, 3, 7, 3, 5) +#define OP_COSP_RCTX sys_insn(1, 3, 7, 3, 6) +#define OP_CPP_RCTX sys_insn(1, 3, 7, 3, 7) + /* Common SCTLR_ELx flags. 
*/ #define SCTLR_ELx_ENTP2 (BIT(60)) #define SCTLR_ELx_DSSBS (BIT(44)) @@ -555,16 +898,14 @@ /* Position the attr at the correct index */ #define MAIR_ATTRIDX(attr, idx) ((attr) << ((idx) * 8)) -/* id_aa64pfr0 */ -#define ID_AA64PFR0_EL1_ELx_64BIT_ONLY 0x1 -#define ID_AA64PFR0_EL1_ELx_32BIT_64BIT 0x2 - /* id_aa64mmfr0 */ #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 0x0 +#define ID_AA64MMFR0_EL1_TGRAN4_LPA2 ID_AA64MMFR0_EL1_TGRAN4_52_BIT #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 0x7 #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 0x0 #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 0x7 #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 0x1 +#define ID_AA64MMFR0_EL1_TGRAN16_LPA2 ID_AA64MMFR0_EL1_TGRAN16_52_BIT #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 0xf #define ARM64_MIN_PARANGE_BITS 32 @@ -572,6 +913,7 @@ #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_DEFAULT 0x0 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_NONE 0x1 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN 0x2 +#define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2 0x3 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX 0x7 #ifdef CONFIG_ARM64_PA_BITS_52 @@ -582,11 +924,13 @@ #if defined(CONFIG_ARM64_4K_PAGES) #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT +#define ID_AA64MMFR0_EL1_TGRAN_LPA2 ID_AA64MMFR0_EL1_TGRAN4_52_BIT #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT #elif defined(CONFIG_ARM64_16K_PAGES) #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN16_SHIFT +#define ID_AA64MMFR0_EL1_TGRAN_LPA2 ID_AA64MMFR0_EL1_TGRAN16_52_BIT #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT @@ -610,6 +954,19 @@ #define SYS_GCR_EL1_RRND (BIT(16)) #define SYS_GCR_EL1_EXCL_MASK 0xffffUL +#ifdef CONFIG_KASAN_HW_TAGS +/* + * KASAN always uses a whole byte for its tags. With CONFIG_KASAN_HW_TAGS it + * only uses tags in the range 0xF0-0xFF, which we map to MTE tags 0x0-0xF. + */ +#define __MTE_TAG_MIN (KASAN_TAG_MIN & 0xf) +#define __MTE_TAG_MAX (KASAN_TAG_MAX & 0xf) +#define __MTE_TAG_INCL GENMASK(__MTE_TAG_MAX, __MTE_TAG_MIN) +#define KERNEL_GCR_EL1_EXCL (SYS_GCR_EL1_EXCL_MASK & ~__MTE_TAG_INCL) +#else +#define KERNEL_GCR_EL1_EXCL SYS_GCR_EL1_EXCL_MASK +#endif + #define KERNEL_GCR_EL1 (SYS_GCR_EL1_RRND | KERNEL_GCR_EL1_EXCL) /* RGSR_EL1 Definitions */ @@ -716,6 +1073,22 @@ #define PIRx_ELx_PERM(idx, perm) ((perm) << ((idx) * 4)) +/* + * Permission Overlay Extension (POE) permission encodings. + */ +#define POE_NONE UL(0x0) +#define POE_R UL(0x1) +#define POE_X UL(0x2) +#define POE_RX UL(0x3) +#define POE_W UL(0x4) +#define POE_RW UL(0x5) +#define POE_XW UL(0x6) +#define POE_RXW UL(0x7) +#define POE_MASK UL(0xf) + +/* Initial value for Permission Overlay Extension for EL0 */ +#define POR_EL0_INIT POE_RXW + #define ARM64_FEATURE_FIELD_BITS 4 /* Defined for compatibility only, do not add new users. */ @@ -789,15 +1162,21 @@ /* * For registers without architectural names, or simply unsupported by * GAS. + * + * __check_r forces warnings to be generated by the compiler when + * evaluating r which wouldn't normally happen due to being passed to + * the assembler via __stringify(r). 
*/ #define read_sysreg_s(r) ({ \ u64 __val; \ + u32 __maybe_unused __check_r = (u32)(r); \ asm volatile(__mrs_s("%0", r) : "=r" (__val)); \ __val; \ }) #define write_sysreg_s(v, r) do { \ u64 __val = (u64)(v); \ + u32 __maybe_unused __check_r = (u32)(r); \ asm volatile(__msr_s(r, "%x0") : : "rZ" (__val)); \ } while (0) @@ -827,6 +1206,8 @@ par; \ }) +#define SYS_FIELD_VALUE(reg, field, val) reg##_##field##_##val + #define SYS_FIELD_GET(reg, field, val) \ FIELD_GET(reg##_##field##_MASK, val) @@ -834,7 +1215,8 @@ FIELD_PREP(reg##_##field##_MASK, val) #define SYS_FIELD_PREP_ENUM(reg, field, val) \ - FIELD_PREP(reg##_##field##_MASK, reg##_##field##_##val) + FIELD_PREP(reg##_##field##_MASK, \ + SYS_FIELD_VALUE(reg, field, val)) #endif diff --git a/tools/include/linux/kasan-tags.h b/tools/include/linux/kasan-tags.h new file mode 100644 index 000000000000..4f85f562512c --- /dev/null +++ b/tools/include/linux/kasan-tags.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_KASAN_TAGS_H +#define _LINUX_KASAN_TAGS_H + +#define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */ +#define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */ +#define KASAN_TAG_MAX 0xFD /* maximum value for random tags */ + +#ifdef CONFIG_KASAN_HW_TAGS +#define KASAN_TAG_MIN 0xF0 /* minimum value for random tags */ +#else +#define KASAN_TAG_MIN 0x00 /* minimum value for random tags */ +#endif + +#endif /* LINUX_KASAN_TAGS_H */ From patchwork Tue Nov 12 10:37:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872101 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 53F0CD32D91 for ; Tue, 12 Nov 2024 11:12:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=rUBQf8hA2fpU+kmR16EcISA5nfxhUZdSES5iWO7fp+4=; b=hi2tTLsjvR9iukZ+DqWpts5PoR +ZGAgYt+5wt46e5PJgf6TpH7bYicT2AvAqfBnIeX8zLtSzp67HdDnSMSPE08jVgu8b7FLNMF6mB23 1gCzCy8eKhAQlo7jwCUvFEvNEfgcVLWYIPjPHr/3LpQSY+FTws1lBKcdpC4OiPzdA3zGKzvyiEuXU 7fqXAh1nfzGNTv0VGEXT4uM+mfGwWiZTpLHxB1M6OfzLtjKsZ0fnd1eRMP8ud1eoGZtI38mpGn3TQ +kVZHWwdipuRkxeCibIml1kfJY/2Iz0TOtij1cXsuUl0Nqn1OGQldgcQNIJf0tUXoGGMdZ2FyYnAV Ri6aTAZQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tAooL-00000003ApM-11Cr; Tue, 12 Nov 2024 11:11:53 +0000 Received: from mail-wm1-x329.google.com ([2a00:1450:4864:20::329]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tAoHS-000000034cX-2kov for linux-arm-kernel@lists.infradead.org; Tue, 12 Nov 2024 10:37:56 +0000 Received: by mail-wm1-x329.google.com with SMTP id 5b1f17b1804b1-431ac30d379so47547465e9.1 for ; Tue, 12 Nov 2024 02:37:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1731407873; x=1732012673; 
From: James Clark
To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev
Cc: James Clark, Mark Brown, Marc Zyngier, Joey Gouly, Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Alexander Shishkin, Mark Rutland, Anshuman Khandual, "Rob Herring (Arm)", James Morse, Shiqi Liu, Fuad Tabba, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 03/12] arm64/sysreg/tools: Move TRFCR definitions to sysreg
Date: Tue, 12 Nov 2024 10:37:02 +0000
Message-Id: <20241112103717.589952-4-james.clark@linaro.org>
In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org>
References: <20241112103717.589952-1-james.clark@linaro.org>

From: James Clark

Convert TRFCR to automatic generation. Add separate definitions for ELx and EL2 as TRFCR_EL1 doesn't have CX. This also mirrors the previous definition so no code change is required.
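As a rough illustration of what the generated definitions provide (the TRFCR_EL2_* names and field positions are assumed from the sysreg description added below; in-kernel code would use the real GENMASK/FIELD_PREP or the SYS_FIELD_PREP_ENUM() helper rather than these local stand-ins):

#include <stdio.h>

/* Minimal stand-ins for kernel helpers, just to make the arithmetic visible */
#define GENMASK(h, l)		(((1UL << ((h) - (l) + 1)) - 1) << (l))
#define FIELD_PREP(mask, val)	(((unsigned long)(val) << __builtin_ctzl(mask)) & (mask))

/* Assumed generator output for the TRFCR_EL2 description added by this patch */
#define TRFCR_EL2_TS_MASK		GENMASK(6, 5)
#define TRFCR_EL2_TS_GUEST_PHYSICAL	0x2UL	/* 0b0010 */
#define TRFCR_EL2_CX			(1UL << 3)
#define TRFCR_EL2_E2TRE			(1UL << 1)
#define TRFCR_EL2_E0HTRE		(1UL << 0)

int main(void)
{
	/* Guest-physical timestamps, context ID tracing and EL2 tracing enabled */
	unsigned long trfcr = FIELD_PREP(TRFCR_EL2_TS_MASK, TRFCR_EL2_TS_GUEST_PHYSICAL) |
			      TRFCR_EL2_CX | TRFCR_EL2_E2TRE;

	printf("TRFCR_EL2 = %#lx\n", trfcr);	/* 0x4a */
	return 0;
}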
Also add TRFCR_EL12 which will start to be used in a later commit. Unfortunately, to avoid breaking the Perf build with duplicate definition errors, the tools copy of the sysreg.h header needs to be updated at the same time rather than the usual second commit. This is because the generated version of sysreg (arch/arm64/include/generated/asm/sysreg-defs.h), is currently shared and tools/ does not have its own copy. Reviewed-by: Mark Brown Signed-off-by: James Clark Signed-off-by: James Clark --- arch/arm64/include/asm/sysreg.h | 12 --------- arch/arm64/tools/sysreg | 36 +++++++++++++++++++++++++++ tools/arch/arm64/include/asm/sysreg.h | 12 --------- 3 files changed, 36 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 345e81e0d2b3..150416682e2c 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -283,8 +283,6 @@ #define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5) #define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6) -#define SYS_TRFCR_EL1 sys_reg(3, 0, 1, 2, 1) - #define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2) #define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0) @@ -519,7 +517,6 @@ #define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0) #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) -#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) #define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0) #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) @@ -983,15 +980,6 @@ /* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */ #define SYS_MPIDR_SAFE_VAL (BIT(31)) -#define TRFCR_ELx_TS_SHIFT 5 -#define TRFCR_ELx_TS_MASK ((0x3UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_ELx_TS_VIRTUAL ((0x1UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_ELx_TS_GUEST_PHYSICAL ((0x2UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_ELx_TS_PHYSICAL ((0x3UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_EL2_CX BIT(3) -#define TRFCR_ELx_ExTRE BIT(1) -#define TRFCR_ELx_E0TRE BIT(0) - /* GIC Hypervisor interface registers */ /* ICH_MISR_EL2 bit definitions */ #define ICH_MISR_EOI (1 << 0) diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg index a26c0da0c42d..27a7afd5329a 100644 --- a/arch/arm64/tools/sysreg +++ b/arch/arm64/tools/sysreg @@ -1994,6 +1994,22 @@ Sysreg CPACR_EL1 3 0 1 0 2 Fields CPACR_ELx EndSysreg +SysregFields TRFCR_ELx +Res0 63:7 +UnsignedEnum 6:5 TS + 0b0001 VIRTUAL + 0b0010 GUEST_PHYSICAL + 0b0011 PHYSICAL +EndEnum +Res0 4:2 +Field 1 ExTRE +Field 0 E0TRE +EndSysregFields + +Sysreg TRFCR_EL1 3 0 1 2 1 +Fields TRFCR_ELx +EndSysreg + Sysreg SMPRI_EL1 3 0 1 2 4 Res0 63:4 Field 3:0 PRIORITY @@ -2536,6 +2552,22 @@ Field 1 ICIALLU Field 0 ICIALLUIS EndSysreg +Sysreg TRFCR_EL2 3 4 1 2 1 +Res0 63:7 +UnsignedEnum 6:5 TS + 0b0000 USE_TRFCR_EL1_TS + 0b0001 VIRTUAL + 0b0010 GUEST_PHYSICAL + 0b0011 PHYSICAL +EndEnum +Res0 4 +Field 3 CX +Res0 2 +Field 1 E2TRE +Field 0 E0HTRE +EndSysreg + + Sysreg HDFGRTR_EL2 3 4 3 1 4 Field 63 PMBIDR_EL1 Field 62 nPMSNEVFR_EL1 @@ -2946,6 +2978,10 @@ Sysreg ZCR_EL12 3 5 1 2 0 Fields ZCR_ELx EndSysreg +Sysreg TRFCR_EL12 3 5 1 2 1 +Fields TRFCR_ELx +EndSysreg + Sysreg SMCR_EL12 3 5 1 2 6 Fields SMCR_ELx EndSysreg diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h index 345e81e0d2b3..150416682e2c 100644 --- a/tools/arch/arm64/include/asm/sysreg.h +++ b/tools/arch/arm64/include/asm/sysreg.h @@ -283,8 +283,6 @@ #define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5) #define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6) -#define SYS_TRFCR_EL1 sys_reg(3, 0, 1, 2, 1) - #define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 
2) #define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0) @@ -519,7 +517,6 @@ #define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0) #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) -#define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) #define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0) #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) @@ -983,15 +980,6 @@ /* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */ #define SYS_MPIDR_SAFE_VAL (BIT(31)) -#define TRFCR_ELx_TS_SHIFT 5 -#define TRFCR_ELx_TS_MASK ((0x3UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_ELx_TS_VIRTUAL ((0x1UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_ELx_TS_GUEST_PHYSICAL ((0x2UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_ELx_TS_PHYSICAL ((0x3UL) << TRFCR_ELx_TS_SHIFT) -#define TRFCR_EL2_CX BIT(3) -#define TRFCR_ELx_ExTRE BIT(1) -#define TRFCR_ELx_E0TRE BIT(0) - /* GIC Hypervisor interface registers */ /* ICH_MISR_EL2 bit definitions */ #define ICH_MISR_EOI (1 << 0) From patchwork Tue Nov 12 10:37:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872114 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2505DD32D92 for ; Tue, 12 Nov 2024 11:17:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=oGQZgIecdgJFFuz0ar27zElDtqI8/ASy5Gql7KrbLX0=; b=UvFuA/6euRLwWqr0v1RW7IrEBT jMn/vGnjL+DPaCtAfLMOPmImOUph9VYNYvabOpPBmlKBlN8FuuDma0/3XHQuR/pcohr8lKIFMtme+ hS+roKjD+zbRUlbiTYC00Qoit/1DJu+xiTNAtT6rrtONPrjoSafNBRu2UhqJQjfMuDoJRDiEZ3BZf /dZDvP7sAn60cur83Dr9ElL0qoathqdbwMzrUaUt8xODXx1ZMJqHxOWMrS+cbJWsOz1MtFDpPE2Rg wkufR68aYM4eJf/msHyh7z6Ugf0CBYDSvFrp87c3680Ep3OxOYNu01AYuxfRq4af1AVgmt+ezfMcu byAsYJKA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tAotd-00000003BuY-1sTt; Tue, 12 Nov 2024 11:17:21 +0000 Received: from mail-wm1-x32a.google.com ([2a00:1450:4864:20::32a]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tAoHb-000000034fg-1nEW for linux-arm-kernel@lists.infradead.org; Tue, 12 Nov 2024 10:38:04 +0000 Received: by mail-wm1-x32a.google.com with SMTP id 5b1f17b1804b1-431481433bdso48154215e9.3 for ; Tue, 12 Nov 2024 02:38:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1731407882; x=1732012682; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oGQZgIecdgJFFuz0ar27zElDtqI8/ASy5Gql7KrbLX0=; b=MIroOh90IvIxcTDXxnUW/ubRZzooKGutpRt06bcvOZ2gB2wkNupmHPZXKDAB3wj54Q eevERbgy/QfNSROwOIjrEM/N05ikOzEYeXO7vJBb5CCB49rUfw6mitU1Eaq2Z4eueCc1 Nl+gDV1+rJcGTVTd494vfSfNvs30nY+7GGXgIKe1SIBY8sL8Db/gAAzoXC8Vh9W1rJQN /Z7SKcg5rt9p3CISmLwdghF3p8WaPZOfQ0pRhWq5qMvbs7n2wa+K2Y4AbupTPH4vdZ4H 
From: James Clark
To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev
Cc: James Clark, Marc Zyngier, Joey Gouly, Zenghui Yu, Catalin Marinas, Will Deacon, Mike Leach, Alexander Shishkin, Mark Rutland, Mark Brown, Anshuman Khandual, James Morse, Fuad Tabba, Shiqi Liu, Raghavendra Rao Ananta, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 04/12] KVM: arm64: Make vcpu flag macros more generic
Date: Tue, 12 Nov 2024 10:37:03 +0000
Message-Id: <20241112103717.589952-5-james.clark@linaro.org>
In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org>
References: <20241112103717.589952-1-james.clark@linaro.org>

Rename vcpu_* to kvm_* so that the same flags mechanism can be used in places other than the vcpu without being confusing. Wherever a macro is still vcpu-specific, like vcpu_get_flag() with its hard-coded v->arch, keep the vcpu_* name; otherwise change it. Also move the "v->arch" access up one macro level for the same reason. This will be used for moving flags to host_data in a later commit.
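To make the renaming concrete, here is a cut-down user-space sketch of the flag machinery after this change (READ_ONCE(), the preemption guards and the multi-bit mask handling are dropped, and the host_data flag is hypothetical): because __kvm_get_flag()/__kvm_set_flag() now dereference v.flagset rather than v->arch.flagset, the same macros can back both the vcpu wrappers and a non-vcpu structure.

#include <stdio.h>

#define BIT(n)	(1UL << (n))

/* Flag triplet: flag-set name, value, mask (single-bit case) */
#define __kvm_single_flag(_set, _f)	_set, (_f), (_f)

/* Cut-down accessors: 'v' is the structure holding the flag-set */
#define __kvm_get_flag(v, flagset, f, m)	((v).flagset & (m))
#define __kvm_set_flag(v, flagset, f, m)	((v).flagset |= (f))

/* vcpu wrappers keep the old spelling by passing v->arch themselves */
#define vcpu_get_flag(v, ...)	__kvm_get_flag(((v)->arch), __VA_ARGS__)
#define vcpu_set_flag(v, ...)	__kvm_set_flag(((v)->arch), __VA_ARGS__)

#define IN_WFIT		__kvm_single_flag(sflags, BIT(3))
#define HOST_FEATURE_X	__kvm_single_flag(flags, BIT(0))	/* hypothetical host_data flag */

struct kvm_vcpu_arch { unsigned long sflags; };
struct kvm_vcpu { struct kvm_vcpu_arch arch; };
struct kvm_host_data { unsigned long flags; };

int main(void)
{
	struct kvm_vcpu vcpu = { { 0 } };
	struct kvm_host_data host_data = { 0 };

	vcpu_set_flag(&vcpu, IN_WFIT);			/* goes through (&vcpu)->arch */
	__kvm_set_flag(host_data, HOST_FEATURE_X);	/* same machinery, no vcpu involved */

	printf("%lx %lx\n", vcpu_get_flag(&vcpu, IN_WFIT),
	       __kvm_get_flag(host_data, HOST_FEATURE_X));	/* prints "8 1" */
	return 0;
}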
Signed-off-by: James Clark --- arch/arm64/include/asm/kvm_host.h | 88 +++++++++++++++---------------- arch/arm64/kvm/hyp/exception.c | 12 ++--- arch/arm64/kvm/inject_fault.c | 4 +- arch/arm64/kvm/mmio.c | 10 ++-- 4 files changed, 57 insertions(+), 57 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f333b189fb43..34aa59f498c4 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -790,22 +790,22 @@ struct kvm_vcpu_arch { /* * Each 'flag' is composed of a comma-separated triplet: * - * - the flag-set it belongs to in the vcpu->arch structure + * - the flag-set it belongs to in the structure pointed to by 'v' * - the value for that flag * - the mask for that flag * - * __vcpu_single_flag() builds such a triplet for a single-bit flag. - * unpack_vcpu_flag() extract the flag value from the triplet for + * __kvm_single_flag() builds such a triplet for a single-bit flag. + * unpack_kvm_flag() extract the flag value from the triplet for * direct use outside of the flag accessors. */ -#define __vcpu_single_flag(_set, _f) _set, (_f), (_f) +#define __kvm_single_flag(_set, _f) _set, (_f), (_f) #define __unpack_flag(_set, _f, _m) _f -#define unpack_vcpu_flag(...) __unpack_flag(__VA_ARGS__) +#define unpack_kvm_flag(...) __unpack_flag(__VA_ARGS__) #define __build_check_flag(v, flagset, f, m) \ do { \ - typeof(v->arch.flagset) *_fset; \ + typeof(v.flagset) *_fset; \ \ /* Check that the flags fit in the mask */ \ BUILD_BUG_ON(HWEIGHT(m) != HWEIGHT((f) | (m))); \ @@ -813,11 +813,11 @@ struct kvm_vcpu_arch { BUILD_BUG_ON((sizeof(*_fset) * 8) <= __fls(m)); \ } while (0) -#define __vcpu_get_flag(v, flagset, f, m) \ +#define __kvm_get_flag(v, flagset, f, m) \ ({ \ __build_check_flag(v, flagset, f, m); \ \ - READ_ONCE(v->arch.flagset) & (m); \ + READ_ONCE(v.flagset) & (m); \ }) /* @@ -826,64 +826,64 @@ struct kvm_vcpu_arch { */ #ifdef __KVM_NVHE_HYPERVISOR__ /* the nVHE hypervisor is always non-preemptible */ -#define __vcpu_flags_preempt_disable() -#define __vcpu_flags_preempt_enable() +#define __kvm_flags_preempt_disable() +#define __kvm_flags_preempt_enable() #else -#define __vcpu_flags_preempt_disable() preempt_disable() -#define __vcpu_flags_preempt_enable() preempt_enable() +#define __kvm_flags_preempt_disable() preempt_disable() +#define __kvm_flags_preempt_enable() preempt_enable() #endif -#define __vcpu_set_flag(v, flagset, f, m) \ +#define __kvm_set_flag(v, flagset, f, m) \ do { \ - typeof(v->arch.flagset) *fset; \ + typeof(v.flagset) *fset; \ \ __build_check_flag(v, flagset, f, m); \ \ - fset = &v->arch.flagset; \ - __vcpu_flags_preempt_disable(); \ + fset = &v.flagset; \ + __kvm_flags_preempt_disable(); \ if (HWEIGHT(m) > 1) \ *fset &= ~(m); \ *fset |= (f); \ - __vcpu_flags_preempt_enable(); \ + __kvm_flags_preempt_enable(); \ } while (0) -#define __vcpu_clear_flag(v, flagset, f, m) \ +#define __kvm_clear_flag(v, flagset, f, m) \ do { \ - typeof(v->arch.flagset) *fset; \ + typeof(v.flagset) *fset; \ \ __build_check_flag(v, flagset, f, m); \ \ - fset = &v->arch.flagset; \ - __vcpu_flags_preempt_disable(); \ + fset = &v.flagset; \ + __kvm_flags_preempt_disable(); \ *fset &= ~(m); \ - __vcpu_flags_preempt_enable(); \ + __kvm_flags_preempt_enable(); \ } while (0) -#define vcpu_get_flag(v, ...) __vcpu_get_flag((v), __VA_ARGS__) -#define vcpu_set_flag(v, ...) __vcpu_set_flag((v), __VA_ARGS__) -#define vcpu_clear_flag(v, ...) __vcpu_clear_flag((v), __VA_ARGS__) +#define vcpu_get_flag(v, ...) 
__kvm_get_flag(((v)->arch), __VA_ARGS__) +#define vcpu_set_flag(v, ...) __kvm_set_flag(((v)->arch), __VA_ARGS__) +#define vcpu_clear_flag(v, ...) __kvm_clear_flag(((v)->arch), __VA_ARGS__) /* SVE exposed to guest */ -#define GUEST_HAS_SVE __vcpu_single_flag(cflags, BIT(0)) +#define GUEST_HAS_SVE __kvm_single_flag(cflags, BIT(0)) /* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +#define VCPU_SVE_FINALIZED __kvm_single_flag(cflags, BIT(1)) /* PTRAUTH exposed to guest */ -#define GUEST_HAS_PTRAUTH __vcpu_single_flag(cflags, BIT(2)) +#define GUEST_HAS_PTRAUTH __kvm_single_flag(cflags, BIT(2)) /* KVM_ARM_VCPU_INIT completed */ -#define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(3)) +#define VCPU_INITIALIZED __kvm_single_flag(cflags, BIT(3)) /* Exception pending */ -#define PENDING_EXCEPTION __vcpu_single_flag(iflags, BIT(0)) +#define PENDING_EXCEPTION __kvm_single_flag(iflags, BIT(0)) /* * PC increment. Overlaps with EXCEPT_MASK on purpose so that it can't * be set together with an exception... */ -#define INCREMENT_PC __vcpu_single_flag(iflags, BIT(1)) +#define INCREMENT_PC __kvm_single_flag(iflags, BIT(1)) /* Target EL/MODE (not a single flag, but let's abuse the macro) */ -#define EXCEPT_MASK __vcpu_single_flag(iflags, GENMASK(3, 1)) +#define EXCEPT_MASK __kvm_single_flag(iflags, GENMASK(3, 1)) /* Helpers to encode exceptions with minimum fuss */ -#define __EXCEPT_MASK_VAL unpack_vcpu_flag(EXCEPT_MASK) +#define __EXCEPT_MASK_VAL unpack_kvm_flag(EXCEPT_MASK) #define __EXCEPT_SHIFT __builtin_ctzl(__EXCEPT_MASK_VAL) #define __vcpu_except_flags(_f) iflags, (_f << __EXCEPT_SHIFT), __EXCEPT_MASK_VAL @@ -907,28 +907,28 @@ struct kvm_vcpu_arch { #define EXCEPT_AA64_EL2_FIQ __vcpu_except_flags(6) #define EXCEPT_AA64_EL2_SERR __vcpu_except_flags(7) /* Guest debug is live */ -#define DEBUG_DIRTY __vcpu_single_flag(iflags, BIT(4)) +#define DEBUG_DIRTY __kvm_single_flag(iflags, BIT(4)) /* Save SPE context if active */ -#define DEBUG_STATE_SAVE_SPE __vcpu_single_flag(iflags, BIT(5)) +#define DEBUG_STATE_SAVE_SPE __kvm_single_flag(iflags, BIT(5)) /* Save TRBE context if active */ -#define DEBUG_STATE_SAVE_TRBE __vcpu_single_flag(iflags, BIT(6)) +#define DEBUG_STATE_SAVE_TRBE __kvm_single_flag(iflags, BIT(6)) /* SVE enabled for host EL0 */ -#define HOST_SVE_ENABLED __vcpu_single_flag(sflags, BIT(0)) +#define HOST_SVE_ENABLED __kvm_single_flag(sflags, BIT(0)) /* SME enabled for EL0 */ -#define HOST_SME_ENABLED __vcpu_single_flag(sflags, BIT(1)) +#define HOST_SME_ENABLED __kvm_single_flag(sflags, BIT(1)) /* Physical CPU not in supported_cpus */ -#define ON_UNSUPPORTED_CPU __vcpu_single_flag(sflags, BIT(2)) +#define ON_UNSUPPORTED_CPU __kvm_single_flag(sflags, BIT(2)) /* WFIT instruction trapped */ -#define IN_WFIT __vcpu_single_flag(sflags, BIT(3)) +#define IN_WFIT __kvm_single_flag(sflags, BIT(3)) /* vcpu system registers loaded on physical CPU */ -#define SYSREGS_ON_CPU __vcpu_single_flag(sflags, BIT(4)) +#define SYSREGS_ON_CPU __kvm_single_flag(sflags, BIT(4)) /* Software step state is Active-pending */ -#define DBG_SS_ACTIVE_PENDING __vcpu_single_flag(sflags, BIT(5)) +#define DBG_SS_ACTIVE_PENDING __kvm_single_flag(sflags, BIT(5)) /* PMUSERENR for the guest EL0 is on physical CPU */ -#define PMUSERENR_ON_CPU __vcpu_single_flag(sflags, BIT(6)) +#define PMUSERENR_ON_CPU __kvm_single_flag(sflags, BIT(6)) /* WFI instruction trapped */ -#define IN_WFI __vcpu_single_flag(sflags, BIT(7)) +#define IN_WFI __kvm_single_flag(sflags, BIT(7)) /* Pointer to the vcpu's SVE 
FFR for sve_{save,load}_state() */ diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c index 424a5107cddb..6bb61e933644 100644 --- a/arch/arm64/kvm/hyp/exception.c +++ b/arch/arm64/kvm/hyp/exception.c @@ -320,13 +320,13 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) { if (vcpu_el1_is_32bit(vcpu)) { switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) { - case unpack_vcpu_flag(EXCEPT_AA32_UND): + case unpack_kvm_flag(EXCEPT_AA32_UND): enter_exception32(vcpu, PSR_AA32_MODE_UND, 4); break; - case unpack_vcpu_flag(EXCEPT_AA32_IABT): + case unpack_kvm_flag(EXCEPT_AA32_IABT): enter_exception32(vcpu, PSR_AA32_MODE_ABT, 12); break; - case unpack_vcpu_flag(EXCEPT_AA32_DABT): + case unpack_kvm_flag(EXCEPT_AA32_DABT): enter_exception32(vcpu, PSR_AA32_MODE_ABT, 16); break; default: @@ -335,15 +335,15 @@ static void kvm_inject_exception(struct kvm_vcpu *vcpu) } } else { switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) { - case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC): + case unpack_kvm_flag(EXCEPT_AA64_EL1_SYNC): enter_exception64(vcpu, PSR_MODE_EL1h, except_type_sync); break; - case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC): + case unpack_kvm_flag(EXCEPT_AA64_EL2_SYNC): enter_exception64(vcpu, PSR_MODE_EL2h, except_type_sync); break; - case unpack_vcpu_flag(EXCEPT_AA64_EL2_IRQ): + case unpack_kvm_flag(EXCEPT_AA64_EL2_IRQ): enter_exception64(vcpu, PSR_MODE_EL2h, except_type_irq); break; diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c index a640e839848e..a7a2540cc507 100644 --- a/arch/arm64/kvm/inject_fault.c +++ b/arch/arm64/kvm/inject_fault.c @@ -83,7 +83,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr esr |= ESR_ELx_FSC_EXTABT; - if (match_target_el(vcpu, unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC))) { + if (match_target_el(vcpu, unpack_kvm_flag(EXCEPT_AA64_EL1_SYNC))) { vcpu_write_sys_reg(vcpu, addr, FAR_EL1); vcpu_write_sys_reg(vcpu, esr, ESR_EL1); } else { @@ -105,7 +105,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu) if (kvm_vcpu_trap_il_is32bit(vcpu)) esr |= ESR_ELx_IL; - if (match_target_el(vcpu, unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC))) + if (match_target_el(vcpu, unpack_kvm_flag(EXCEPT_AA64_EL1_SYNC))) vcpu_write_sys_reg(vcpu, esr, ESR_EL1); else vcpu_write_sys_reg(vcpu, esr, ESR_EL2); diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c index ab365e839874..1728e37739fe 100644 --- a/arch/arm64/kvm/mmio.c +++ b/arch/arm64/kvm/mmio.c @@ -79,17 +79,17 @@ static bool kvm_pending_sync_exception(struct kvm_vcpu *vcpu) if (vcpu_el1_is_32bit(vcpu)) { switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) { - case unpack_vcpu_flag(EXCEPT_AA32_UND): - case unpack_vcpu_flag(EXCEPT_AA32_IABT): - case unpack_vcpu_flag(EXCEPT_AA32_DABT): + case unpack_kvm_flag(EXCEPT_AA32_UND): + case unpack_kvm_flag(EXCEPT_AA32_IABT): + case unpack_kvm_flag(EXCEPT_AA32_DABT): return true; default: return false; } } else { switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) { - case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC): - case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC): + case unpack_kvm_flag(EXCEPT_AA64_EL1_SYNC): + case unpack_kvm_flag(EXCEPT_AA64_EL2_SYNC): return true; default: return false; From patchwork Tue Nov 12 10:37:04 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872117 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org 
([145.224.90.214]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-432bbf436ffsm142270955e9.44.2024.11.12.02.38.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Nov 2024 02:38:09 -0800 (PST) From: James Clark To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev Cc: James Clark , Marc Zyngier , Joey Gouly , Zenghui Yu , Catalin Marinas , Will Deacon , Mike Leach , Alexander Shishkin , Mark Rutland , Mark Brown , Anshuman Khandual , Shiqi Liu , James Morse , Fuad Tabba , Raghavendra Rao Ananta , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 05/12] KVM: arm64: Move SPE and TRBE flags to host data Date: Tue, 12 Nov 2024 10:37:04 +0000 Message-Id: <20241112103717.589952-6-james.clark@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org> References: <20241112103717.589952-1-james.clark@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241112_023811_807980_603B22EC X-CRM114-Status: GOOD ( 25.87 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org They don't change, are per-CPU and don't need to be on the vcpu, so initialize them one time only. Another benefit is this is done before the host is deprivileged so can be trusted by pKVM. Rename SAVE to HAS which is slightly more accurate because saving only happens when it exists _and_ is enabled. Signed-off-by: James Clark --- arch/arm64/include/asm/kvm_host.h | 21 +++++++----- arch/arm64/kvm/arm.c | 3 -- arch/arm64/kvm/debug.c | 52 +++++++++++------------------- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 8 ++--- 4 files changed, 36 insertions(+), 48 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 34aa59f498c4..61ff34e1ffef 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -638,6 +638,11 @@ struct kvm_host_data { FP_STATE_GUEST_OWNED, } fp_owner; + struct { + /* Host CPU features, set at init */ + u8 feats; + } flags; + /* * host_debug_state contains the host registers which are * saved and restored during world switches. @@ -908,10 +913,6 @@ struct kvm_vcpu_arch { #define EXCEPT_AA64_EL2_SERR __vcpu_except_flags(7) /* Guest debug is live */ #define DEBUG_DIRTY __kvm_single_flag(iflags, BIT(4)) -/* Save SPE context if active */ -#define DEBUG_STATE_SAVE_SPE __kvm_single_flag(iflags, BIT(5)) -/* Save TRBE context if active */ -#define DEBUG_STATE_SAVE_TRBE __kvm_single_flag(iflags, BIT(6)) /* SVE enabled for host EL0 */ #define HOST_SVE_ENABLED __kvm_single_flag(sflags, BIT(0)) @@ -930,6 +931,14 @@ struct kvm_vcpu_arch { /* WFI instruction trapped */ #define IN_WFI __kvm_single_flag(sflags, BIT(7)) +#define host_data_get_flag(...) __kvm_get_flag((*host_data_ptr(flags)), __VA_ARGS__) +#define host_data_set_flag(...) __kvm_set_flag((*host_data_ptr(flags)), __VA_ARGS__) +#define host_data_clear_flag(...) 
__kvm_clear_flag((*host_data_ptr(flags)), __VA_ARGS__) + +/* Save SPE context if active */ +#define HOST_FEAT_HAS_SPE __kvm_single_flag(feats, BIT(0)) +/* Save TRBE context if active */ +#define HOST_FEAT_HAS_TRBE __kvm_single_flag(feats, BIT(1)) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ @@ -1367,10 +1376,6 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr) return (!has_vhe() && attr->exclude_host); } -/* Flags for host debug state */ -void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu); -void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu); - #ifdef CONFIG_KVM void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr); void kvm_clr_pmu_events(u64 clr); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index a102c3aebdbc..2a54baca3144 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -617,15 +617,12 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) vcpu_set_pauth_traps(vcpu); - kvm_arch_vcpu_load_debug_state_flags(vcpu); - if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus)) vcpu_set_on_unsupported_cpu(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { - kvm_arch_vcpu_put_debug_state_flags(vcpu); kvm_arch_vcpu_put_fp(vcpu); if (has_vhe()) kvm_vcpu_put_vhe(vcpu); diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index ce8886122ed3..cf5558806687 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -68,16 +68,31 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu) /** * kvm_arm_init_debug - grab what we need for debug * - * Currently the sole task of this function is to retrieve the initial - * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has - * presumably been set-up by some knowledgeable bootcode. - * * It is called once per-cpu during CPU hyp initialisation. */ void kvm_arm_init_debug(void) { + u64 dfr0 = read_sysreg(id_aa64dfr0_el1); + + /* + * Retrieve the initial value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which + * has presumably been set-up by some knowledgeable bootcode. + */ __this_cpu_write(mdcr_el2, kvm_call_hyp_ret(__kvm_get_mdcr_el2)); + + /* + * If SPE is present on this CPU and is available at current EL, + * we may need to check if the host state needs to be saved. + */ + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && + !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(PMBIDR_EL1_P_SHIFT))) + host_data_set_flag(HOST_FEAT_HAS_SPE); + + /* Check if we have TRBE implemented and available at the host */ + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) && + !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P)) + host_data_set_flag(HOST_FEAT_HAS_TRBE); } /** @@ -314,32 +329,3 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) } } } - -void kvm_arch_vcpu_load_debug_state_flags(struct kvm_vcpu *vcpu) -{ - u64 dfr0; - - /* For VHE, there is nothing to do */ - if (has_vhe()) - return; - - dfr0 = read_sysreg(id_aa64dfr0_el1); - /* - * If SPE is present on this CPU and is available at current EL, - * we may need to check if the host state needs to be saved. 
- */ - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && - !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(PMBIDR_EL1_P_SHIFT))) - vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_SPE); - - /* Check if we have TRBE implemented and available at the host */ - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) && - !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P)) - vcpu_set_flag(vcpu, DEBUG_STATE_SAVE_TRBE); -} - -void kvm_arch_vcpu_put_debug_state_flags(struct kvm_vcpu *vcpu) -{ - vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_SPE); - vcpu_clear_flag(vcpu, DEBUG_STATE_SAVE_TRBE); -} diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index 53efda0235cf..89f44a51a172 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -82,10 +82,10 @@ static void __debug_restore_trace(u64 trfcr_el1) void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu) { /* Disable and flush SPE data generation */ - if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE)) + if (host_data_get_flag(HOST_FEAT_HAS_SPE)) __debug_save_spe(host_data_ptr(host_debug_state.pmscr_el1)); /* Disable and flush Self-Hosted Trace generation */ - if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE)) + if (host_data_get_flag(HOST_FEAT_HAS_TRBE)) __debug_save_trace(host_data_ptr(host_debug_state.trfcr_el1)); } @@ -96,9 +96,9 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu) void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu) { - if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_SPE)) + if (host_data_get_flag(HOST_FEAT_HAS_SPE)) __debug_restore_spe(*host_data_ptr(host_debug_state.pmscr_el1)); - if (vcpu_get_flag(vcpu, DEBUG_STATE_SAVE_TRBE)) + if (host_data_get_flag(HOST_FEAT_HAS_TRBE)) __debug_restore_trace(*host_data_ptr(host_debug_state.trfcr_el1)); } From patchwork Tue Nov 12 10:37:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872118 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CF090D32D95 for ; Tue, 12 Nov 2024 11:26:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=UdrH76534AdnOkkz/8MUO16ELoaDJWuaGxP7kufZ9lk=; b=C4WiF1mQb/hixhCAQidrg5ibML x89UXU+42k+qa9xwlZTCmUdwPB09DVFC6WGXBLVLXfdj7p2qmbsDnlmYp7GMrw8wtTgEjm0+gU/oQ mIXos8zYDbfmEs/SUDhqQIpMsuLVAHcY5GMqC96bsaSx5onpblzRe9PClG5kBMB8G8ejgUXNSzF1L 1vVWoCBc+MUYbZiycALaEJeMYcbZ7qFbKUJ+jIrTASfQc5KuiXpvtLUiAjAYf8BMjPeNZ5QQmf95t v9QXxLmQ70bwlfF8JjuPoRC0fFBkT4k11JOqYokcu+BYD4wIhbtbycr8pYYq0JqJYYzOW2QauChVR NJ5ycnYw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tAp2T-00000003Dci-2saV; Tue, 12 Nov 2024 11:26:29 +0000 Received: from mail-wm1-x335.google.com ([2a00:1450:4864:20::335]) by bombadil.infradead.org with 
([145.224.90.214]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-432bbf436ffsm142270955e9.44.2024.11.12.02.38.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Nov 2024 02:38:17 -0800 (PST) From: James Clark To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev Cc: James Clark , James Clark , Marc Zyngier , Joey Gouly , Zenghui Yu , Catalin Marinas , Will Deacon , Mike Leach , Alexander Shishkin , Mark Rutland , Anshuman Khandual , "Rob Herring (Arm)" , James Morse , Shiqi Liu , Fuad Tabba , Mark Brown , Raghavendra Rao Ananta , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 06/12] KVM: arm64: Add flag for FEAT_TRF Date: Tue, 12 Nov 2024 10:37:05 +0000 Message-Id: <20241112103717.589952-7-james.clark@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org> References: <20241112103717.589952-1-james.clark@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241112_023819_789776_73CB5266 X-CRM114-Status: GOOD ( 15.19 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: James Clark FEAT_TRF can control trace generation at different ELs so this will enable support of exclude/include guest rules when it's present without TRBE. With TRBE we'll have to continue to always disable guest trace. Signed-off-by: James Clark Signed-off-by: James Clark --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/kvm/debug.c | 14 ++++++++++---- 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 61ff34e1ffef..5dfc3f4f74b2 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -939,6 +939,8 @@ struct kvm_vcpu_arch { #define HOST_FEAT_HAS_SPE __kvm_single_flag(feats, BIT(0)) /* Save TRBE context if active */ #define HOST_FEAT_HAS_TRBE __kvm_single_flag(feats, BIT(1)) +/* CPU has Feat_TRF */ +#define HOST_FEAT_HAS_TRF __kvm_single_flag(feats, BIT(2)) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index cf5558806687..fb41ef5d9db9 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -89,10 +89,16 @@ void kvm_arm_init_debug(void) !(read_sysreg_s(SYS_PMBIDR_EL1) & BIT(PMBIDR_EL1_P_SHIFT))) host_data_set_flag(HOST_FEAT_HAS_SPE); - /* Check if we have TRBE implemented and available at the host */ - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) && - !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P)) - host_data_set_flag(HOST_FEAT_HAS_TRBE); + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceFilt_SHIFT)) { + host_data_set_flag(HOST_FEAT_HAS_TRF); + /* + * The architecture mandates FEAT_TRF with TRBE, so only need to check + * for TRBE if TRF exists. 
+ */ + if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_TraceBuffer_SHIFT) && + !(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_P)) + host_data_set_flag(HOST_FEAT_HAS_TRBE); + } } /** From patchwork Tue Nov 12 10:37:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872122 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 980E7D32D98 for ; Tue, 12 Nov 2024 11:32:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=LV3nsyw5404Uo8G1nSnnmKQ1QdbiAR4HgkQNY6sysiA=; b=g4aDXK4i9ZEIenhIMj6vceuFb9 FOnyjH78feYBpirkyBbSdT9UFcINxfN0y9P7bK/CDiTxOkCDB62Amm0sEDZcH3+2muy/Ow+Uq3NFf ROeY6yIiN09Yc2iOmyE8bR201coiz++NvT2fA2NDq4IaP4aJMWGvoaWu788QyMC6H12Wsmq29lk+k cUBQgYX8Yd7JJahQNLmsv71iK42SnBkyu2U6hWGdnBIbO+osZ9rn5sk7WkQ10WFr1FGBjuXUVpYBG AireBkH9vXIhtYDnp6WawwAMS9FA6pajWt0UrIRyS2zJUDtpbKNOc36QCvIuLtzRuom2J0X04T964 VAp2JGnw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tAp7m-00000003EbH-0eBW; Tue, 12 Nov 2024 11:31:58 +0000 Received: from mail-wm1-x334.google.com ([2a00:1450:4864:20::334]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tAoI0-000000034nV-0L8Q for linux-arm-kernel@lists.infradead.org; Tue, 12 Nov 2024 10:38:29 +0000 Received: by mail-wm1-x334.google.com with SMTP id 5b1f17b1804b1-43159c9f617so43184595e9.2 for ; Tue, 12 Nov 2024 02:38:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1731407907; x=1732012707; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LV3nsyw5404Uo8G1nSnnmKQ1QdbiAR4HgkQNY6sysiA=; b=MDf3Pqs15dtPRdi/VXiQ3Dy0Fjg/YMzZB9cKTj9Ksy8nsYPHzmfCgLZtrknnOVD22R Uh2DKXE3SJkOrImjawdQM3TNjykiAymmxkTwsmwdXyHoesGYxeaXe3mlmkVIhac/4Qen UjtrVs6t/CKJX8yY8XPdsL3QqjLQ7iqNdcYOZfklJIZ6JPyPu56c2G/PhBuqqrij6A/U q/bZmqHkhqWE45IT5i/J9iCPXRJB4LlJYadohqEgGIkAxUpW0vIrOr8jzBGtYPSs/tGT 5gVWKf+7wXKP+duxWDtdkUU5/FU318/8vMVUPy70y84Lw4EUd6Ef88aWQgQbQ1S9IZg3 zjtg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731407907; x=1732012707; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LV3nsyw5404Uo8G1nSnnmKQ1QdbiAR4HgkQNY6sysiA=; b=TFUGaCbutMDKUCulWMzbB11JlYeRt0Wc70xkPdtI+QmhNfD7g0rkkCXVn56ZE0UrJs 2iebKVSMQCEEfcD0Sql8JEFwBQzS8ac0CQIGJYUu13BDiaKIcKqh8EtS7YGLkkK9nBhW sMEOrmZC0XwY1+epoDuvRBMhDvC/7/vEanwGBJLhRJSosQF7HvVDrJyKAlj+iA9Wf6mo Ckkt7lXHq1PDawPa+WDlUvbIhQoxsxTCZ4G7JTRHcbsYqsOPhIDvQ/GlXFs4yy/eZbzw YWnZ56EoabUxQ1qSjVMcNVhCwc0O8hoOy1GEdYPgnrQ7r0NLg3QaLLslJZXEe4xiEYGe P6sQ== 
X-Forwarded-Encrypted: i=1; AJvYcCXGf+HrujzDHh0ytj/BkNYlOVtabnbc/KXVJyIrcTXHhN7cp/vYoHuRo8K4iiokuDYwnJIWhu51D/NNytwQdGYe@lists.infradead.org X-Gm-Message-State: AOJu0YwRZa0Cd48nRWgo6ijUvSBnOnoywzN0Y8B0i334TEK5T4Ovi9is HZbL0mn9aIigjCk0ssFhwp7aVhSWHNHKzweKBYXBt0qRHN7c7OM4VIpCByQozlI= X-Google-Smtp-Source: AGHT+IE1wDzy/6MnLRWAcMttuy2R6a5vKVFmcLrn5P5890TBsIGhMnK8XJK5ODuYyXhsBnu+l89pAA== X-Received: by 2002:a05:600c:5127:b0:431:5ba2:2450 with SMTP id 5b1f17b1804b1-432b751e264mr133161105e9.33.1731407906666; Tue, 12 Nov 2024 02:38:26 -0800 (PST) Received: from pop-os.. ([145.224.90.214]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-432bbf436ffsm142270955e9.44.2024.11.12.02.38.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Nov 2024 02:38:26 -0800 (PST) From: James Clark To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev Cc: James Clark , Marc Zyngier , Joey Gouly , Zenghui Yu , Catalin Marinas , Will Deacon , Mike Leach , Alexander Shishkin , Mark Rutland , Anshuman Khandual , Fuad Tabba , James Morse , Shiqi Liu , Mark Brown , Raghavendra Rao Ananta , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 07/12] KVM: arm64: arm_spe: Give SPE enabled state to KVM Date: Tue, 12 Nov 2024 10:37:06 +0000 Message-Id: <20241112103717.589952-8-james.clark@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org> References: <20241112103717.589952-1-james.clark@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241112_023828_180189_DD2BC83E X-CRM114-Status: GOOD ( 20.29 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently in nVHE, KVM has to check if SPE is enabled on every guest switch even if it was never used. Because it's a debug feature and is more likely to not be used than used, give KVM the SPE buffer status to allow a much simpler and faster do-nothing path in the hyp. This is always called with preemption disabled except for probe/hotplug which gets wrapped with preempt_disable(). 
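The SPE enabled state is recorded with the host_data_*_flag() helpers, which reuse the flag-triplet macros generalised at the start of the series. A standalone userspace sketch of that pattern (hypothetical names, not the kernel macros verbatim) shows how the flagset name, value and mask travel through __VA_ARGS__:

/*
 * Minimal sketch of the flag-triplet pattern (illustrative only, compiles
 * as plain userspace C). Each flag expands to "flagset, value, mask" and
 * the accessors splice the flagset name into a struct member access.
 */
#include <stdio.h>

struct host_flags {
	unsigned char feats;
	unsigned char state;
};

#define SINGLE_FLAG(_set, _f)	_set, (_f), (_f)

#define FEAT_HAS_SPE	SINGLE_FLAG(feats, 1u << 0)
#define STATE_SPE_EN	SINGLE_FLAG(state, 1u << 0)

#define __set_flag(v, flagset, f, m)	((v).flagset = ((v).flagset & ~(m)) | (f))
#define __get_flag(v, flagset, f, m)	((v).flagset & (m))
#define __clear_flag(v, flagset, f, m)	((v).flagset &= ~(m))

#define set_flag(v, ...)	__set_flag((v), __VA_ARGS__)
#define get_flag(v, ...)	__get_flag((v), __VA_ARGS__)
#define clear_flag(v, ...)	__clear_flag((v), __VA_ARGS__)

int main(void)
{
	struct host_flags hf = { 0 };

	set_flag(hf, FEAT_HAS_SPE);	/* sets bit 0 of hf.feats */
	set_flag(hf, STATE_SPE_EN);	/* sets bit 0 of hf.state */

	if (get_flag(hf, STATE_SPE_EN))
		printf("SPE buffer marked enabled\n");

	clear_flag(hf, STATE_SPE_EN);
	printf("feats=%#x state=%#x\n", (unsigned)hf.feats, (unsigned)hf.state);
	return 0;
}

Each flag constant carries its own flagset member, so the same accessor works for the feats and state sets without naming the member at the call site.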
Signed-off-by: James Clark --- arch/arm64/include/asm/kvm_host.h | 6 ++++++ arch/arm64/kvm/debug.c | 29 +++++++++++++++++++++++++++++ drivers/perf/arm_spe_pmu.c | 13 +++++++++++-- 3 files changed, 46 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5dfc3f4f74b2..7f1e32d40f0c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -641,6 +641,8 @@ struct kvm_host_data { struct { /* Host CPU features, set at init */ u8 feats; + /* Host CPU state */ + u8 state; } flags; /* @@ -941,6 +943,8 @@ struct kvm_vcpu_arch { #define HOST_FEAT_HAS_TRBE __kvm_single_flag(feats, BIT(1)) /* CPU has Feat_TRF */ #define HOST_FEAT_HAS_TRF __kvm_single_flag(feats, BIT(2)) +/* PMBLIMITR_EL1_E is set (SPE profiling buffer enabled) */ +#define HOST_STATE_SPE_EN __kvm_single_flag(state, BIT(0)) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ @@ -1382,6 +1386,7 @@ static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr) void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr); void kvm_clr_pmu_events(u64 clr); bool kvm_set_pmuserenr(u64 val); +void kvm_set_pmblimitr(u64 pmblimitr); #else static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {} static inline void kvm_clr_pmu_events(u64 clr) {} @@ -1389,6 +1394,7 @@ static inline bool kvm_set_pmuserenr(u64 val) { return false; } +static inline void kvm_set_pmblimitr(u64 pmblimitr) {} #endif void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index fb41ef5d9db9..ed3b4d057c52 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -335,3 +335,32 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) } } } + +static bool kvm_arm_skip_trace_state(void) +{ + /* pKVM hyp finds out the state for itself */ + if (is_protected_kvm_enabled()) + return true; + + /* Make sure state gets there in one piece */ + if (WARN_ON_ONCE(preemptible())) + return true; + + return false; +} + +void kvm_set_pmblimitr(u64 pmblimitr) +{ + /* Only read in nVHE */ + if (has_vhe()) + return; + + if (kvm_arm_skip_trace_state()) + return; + + if (pmblimitr & PMBLIMITR_EL1_E) + host_data_set_flag(HOST_STATE_SPE_EN); + else + host_data_clear_flag(HOST_STATE_SPE_EN); +} +EXPORT_SYMBOL_GPL(kvm_set_pmblimitr); diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c index 3569050f9cf3..6a79df363aa6 100644 --- a/drivers/perf/arm_spe_pmu.c +++ b/drivers/perf/arm_spe_pmu.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -496,6 +497,12 @@ static u64 arm_spe_pmu_next_off(struct perf_output_handle *handle) return limit; } +static void arm_spe_write_pmblimitr(u64 val) +{ + write_sysreg_s(val, SYS_PMBLIMITR_EL1); + kvm_set_pmblimitr(val); +} + static void arm_spe_perf_aux_output_begin(struct perf_output_handle *handle, struct perf_event *event) { @@ -524,7 +531,7 @@ static void arm_spe_perf_aux_output_begin(struct perf_output_handle *handle, write_sysreg_s(base, SYS_PMBPTR_EL1); out_write_limit: - write_sysreg_s(limit, SYS_PMBLIMITR_EL1); + arm_spe_write_pmblimitr(limit); } static void arm_spe_perf_aux_output_end(struct perf_output_handle *handle) @@ -552,7 +559,7 @@ static void arm_spe_pmu_disable_and_drain_local(void) dsb(nsh); /* Disable the profiling buffer */ - write_sysreg_s(0, SYS_PMBLIMITR_EL1); + arm_spe_write_pmblimitr(0); isb(); } @@ -1095,7 +1102,9 @@ static 
void __arm_spe_pmu_reset_local(void) * This is probably overkill, as we have no idea where we're * draining any buffered data to... */ + preempt_disable(); arm_spe_pmu_disable_and_drain_local(); + preempt_enable(); /* Reset the buffer base pointer */ write_sysreg_s(0, SYS_PMBPTR_EL1); From patchwork Tue Nov 12 10:37:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872125 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 33EC4D32D96 for ; Tue, 12 Nov 2024 11:34:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=5PUUdanfV8/dMTaROo3OjL1mScK+2ashhKsQ/yXMrNE=; b=EO6wQhR4ffs3qh0HTaojvaLc9A QoQapqle2zavUW2LLgoy/LkRKXOfH78inaHSTi23T1eKnQLg7jWuMdWCc/Xyj+/nDo5xA22F6+HKS hPwZF6RG3vUy/iyoEMFrpauJ/S743BsAksTh94yqYN2+VfSpJAFTlKRsLGthkdchc7TmROQbj+q0t oTWgxjgMAf95AK+EuR218FgwTinZyFfZpU9UxZtXgrIfPDjar6xzmmykmJTJsgqt90EOJDrK2NH5P QSZebWz88XK2s+CQrWp8qmH0Peyg4ohkXrURf7GEwA5nkrqOSMD2QsiMOPxNHCYjb9AAF1XrDG8th ISdzQpeA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tAp9X-00000003Eyy-46r1; Tue, 12 Nov 2024 11:33:47 +0000 Received: from mail-wm1-x331.google.com ([2a00:1450:4864:20::331]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tAoI9-000000034pg-152E for linux-arm-kernel@lists.infradead.org; Tue, 12 Nov 2024 10:38:38 +0000 Received: by mail-wm1-x331.google.com with SMTP id 5b1f17b1804b1-43159c9f617so43185705e9.2 for ; Tue, 12 Nov 2024 02:38:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1731407915; x=1732012715; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=5PUUdanfV8/dMTaROo3OjL1mScK+2ashhKsQ/yXMrNE=; b=eUR/YeNMWQwvCyDntFrDa2ohHfX87YWCCvenDxaJB5MAkEOMcvyzAUv1ezKzvqkRH/ iNNPyaioEi9k2gGKSGlm31UT9v6xofjglWFIS2JF7z5/WQNnvtPCdOPKdnDb62mEVBQ2 zupKV4vNCUWuLv7VnVocOEw5fGggNCiEk9X8VJsBWDwaw9avyoOJB5v98lfTeEPjKPE/ ppZlCXkxyZn2uHcl0IAEsj4FMfgRHUl9vby3fFSnTn5QkWFfctUnZogEKAgV+WbV75wN ZhQOId/HOYds5L+mN/tyYdQck49vrhz4suSjp1eSg+UXC5XDsmc15sbnOIHRlVDZXfTx PQhw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731407915; x=1732012715; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=5PUUdanfV8/dMTaROo3OjL1mScK+2ashhKsQ/yXMrNE=; b=Thk5lVd/lL7f+0nQO2l3BNUWOpHDIDkTFSD4QkRwNztBhC5ZacluCIWUxbUeqUKN78 PAdiISLcOR+UNtnAf7HFvvXdhP+YKO4adXay4//89fdkjbHsEfL6GMlJ7g9s4B8Z2nI8 5jNyRJKA7ApItImLp8928zu5eJHWOjtFYSf1L+ZdGEqWiqzpQX5sHWi27N0G3erBP62o 
5PBI1Ac4dbH56RA64hX95mHqkOe/gMJwtnyyb3aAToBRJCyHGrmeEibbGjsSqGxOxB8m Uz9jV5fjZ3sfzCq4hYQd7fr/Of6QiTXKa4OKiuRpoaFeHVeyKADUMbqm+e4Xx4Ra1BKA VUBg== X-Forwarded-Encrypted: i=1; AJvYcCV9gX9vJxagoL1zzrmvzbJJ7Rz2tZ13BFifncqM+JvJ17s2YawxFXCTSCbbciFHLmci6czFJHmZkCDAwDSVCVjB@lists.infradead.org X-Gm-Message-State: AOJu0YzO09vp686hBimaA+lZPggCZJXQVlNy0ikZx5Qp/Xwp1/m70MWS m+TcOHUhaXw6mi/b6AlzptZsN5Wx54prgcDTvvi2ioooP0MRqTn8KuNLF2f3GGg= X-Google-Smtp-Source: AGHT+IGRUXiwIphPDLp8SPINsrbnyjqSeYnthdnyXbEy6DX7IRif2HAVNDMi2ra2ERahPrHHXoqCRw== X-Received: by 2002:a05:600c:35d5:b0:431:547e:81d0 with SMTP id 5b1f17b1804b1-432b7503749mr126843845e9.11.1731407915453; Tue, 12 Nov 2024 02:38:35 -0800 (PST) Received: from pop-os.. ([145.224.90.214]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-432bbf436ffsm142270955e9.44.2024.11.12.02.38.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Nov 2024 02:38:34 -0800 (PST) From: James Clark To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev Cc: James Clark , Marc Zyngier , Joey Gouly , Zenghui Yu , Catalin Marinas , Will Deacon , Mike Leach , Alexander Shishkin , Mark Rutland , Anshuman Khandual , "Rob Herring (Arm)" , Shiqi Liu , Fuad Tabba , James Morse , Mark Brown , Raghavendra Rao Ananta , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 08/12] KVM: arm64: Don't hit sysregs to see if SPE is enabled or not Date: Tue, 12 Nov 2024 10:37:07 +0000 Message-Id: <20241112103717.589952-9-james.clark@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org> References: <20241112103717.589952-1-james.clark@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241112_023837_320363_D168BA94 X-CRM114-Status: GOOD ( 22.96 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that the driver tells us whether SPE was used or not we can use that. Except in pKVM where the host isn't trusted we keep the existing feature + sysreg check. The unconditional zeroing of pmscr_el1 if nothing is saved can also be dropped. Zeroing it after the restore has the same effect, but only incurs the write if it was actually enabled. Now in the normal nVHE case, SPE saving is gated by a single flag read on kvm_host_data. 
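The ordering change around pmscr_el1 can be pictured with a small userspace model (stand-in variables, not the kernel code; barriers and the pKVM sysreg fallback are elided): the saved copy doubles as the "restore needed" marker, so zeroing it after the restore keeps the idle path free of writes.

/*
 * Userspace model of the save/restore contract described above. "pmscr"
 * stands in for SYS_PMSCR_EL1 and "saved" for host_debug_state.pmscr_el1.
 */
#include <assert.h>
#include <stdbool.h>

static unsigned long pmscr;	/* models the live register */
static unsigned long saved;	/* models the per-CPU saved copy */

static void guest_entry(bool spe_enabled)
{
	if (!spe_enabled)
		return;		/* do-nothing fast path: no writes at all */
	saved = pmscr;		/* save the control register... */
	pmscr = 0;		/* ...and disable data generation */
}

static void guest_exit(void)
{
	if (!saved)
		return;		/* nothing was saved, nothing to restore */
	pmscr = saved;
	saved = 0;		/* zero after the restore, not on every save */
}

int main(void)
{
	pmscr = 0x21;		/* host had profiling enabled */
	guest_entry(true);
	assert(pmscr == 0);
	guest_exit();
	assert(pmscr == 0x21 && saved == 0);

	guest_entry(false);	/* SPE idle: neither location is written */
	guest_exit();
	return 0;
}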
Signed-off-by: James Clark --- arch/arm64/include/asm/kvm_hyp.h | 2 +- arch/arm64/kvm/hyp/nvhe/debug-sr.c | 52 ++++++++++++++++++------------ arch/arm64/kvm/hyp/nvhe/switch.c | 2 +- 3 files changed, 34 insertions(+), 22 deletions(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index c838309e4ec4..4039a42ca62a 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -105,7 +105,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu); void __debug_switch_to_host(struct kvm_vcpu *vcpu); #ifdef __KVM_NVHE_HYPERVISOR__ -void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu); +void __debug_save_host_buffers_nvhe(void); void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu); #endif diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c index 89f44a51a172..578c549af3c6 100644 --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c @@ -14,24 +14,23 @@ #include #include -static void __debug_save_spe(u64 *pmscr_el1) +static bool __debug_spe_enabled(void) { - u64 reg; - - /* Clear pmscr in case of early return */ - *pmscr_el1 = 0; - /* - * At this point, we know that this CPU implements - * SPE and is available to the host. - * Check if the host is actually using it ? + * Check if the host is actually using SPE. In pKVM read the state, + * otherwise just trust that the host told us it was being used. */ - reg = read_sysreg_s(SYS_PMBLIMITR_EL1); - if (!(reg & BIT(PMBLIMITR_EL1_E_SHIFT))) - return; + if (unlikely(is_protected_kvm_enabled())) + return host_data_get_flag(HOST_FEAT_HAS_SPE) && + (read_sysreg_s(SYS_PMBLIMITR_EL1) & PMBLIMITR_EL1_E); + else + return host_data_get_flag(HOST_STATE_SPE_EN); +} - /* Yes; save the control register and disable data generation */ - *pmscr_el1 = read_sysreg_el1(SYS_PMSCR); +static void __debug_save_spe(void) +{ + /* Save the control register and disable data generation */ + *host_data_ptr(host_debug_state.pmscr_el1) = read_sysreg_el1(SYS_PMSCR); write_sysreg_el1(0, SYS_PMSCR); isb(); @@ -39,8 +38,14 @@ static void __debug_save_spe(u64 *pmscr_el1) psb_csync(); } -static void __debug_restore_spe(u64 pmscr_el1) +static void __debug_restore_spe(void) { + u64 pmscr_el1 = *host_data_ptr(host_debug_state.pmscr_el1); + + /* + * PMSCR was set to 0 to disable so if it's already 0, no restore is + * necessary. + */ if (!pmscr_el1) return; @@ -49,6 +54,13 @@ static void __debug_restore_spe(u64 pmscr_el1) /* Re-enable data generation */ write_sysreg_el1(pmscr_el1, SYS_PMSCR); + + /* + * Disable future restores until a non zero value is saved again. Since + * this is called unconditionally on exit, future register writes are + * skipped until they are needed again. 
+ */ + *host_data_ptr(host_debug_state.pmscr_el1) = 0; } static void __debug_save_trace(u64 *trfcr_el1) @@ -79,11 +91,12 @@ static void __debug_restore_trace(u64 trfcr_el1) write_sysreg_el1(trfcr_el1, SYS_TRFCR); } -void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu) +void __debug_save_host_buffers_nvhe(void) { /* Disable and flush SPE data generation */ - if (host_data_get_flag(HOST_FEAT_HAS_SPE)) - __debug_save_spe(host_data_ptr(host_debug_state.pmscr_el1)); + if (__debug_spe_enabled()) + __debug_save_spe(); + /* Disable and flush Self-Hosted Trace generation */ if (host_data_get_flag(HOST_FEAT_HAS_TRBE)) __debug_save_trace(host_data_ptr(host_debug_state.trfcr_el1)); @@ -96,8 +109,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu) void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu) { - if (host_data_get_flag(HOST_FEAT_HAS_SPE)) - __debug_restore_spe(*host_data_ptr(host_debug_state.pmscr_el1)); + __debug_restore_spe(); if (host_data_get_flag(HOST_FEAT_HAS_TRBE)) __debug_restore_trace(*host_data_ptr(host_debug_state.trfcr_el1)); } diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index cc69106734ca..edd657797463 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -300,7 +300,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu) * translation regime to EL2 (via MDCR_EL2_E2PB == 0) and * before we load guest Stage1. */ - __debug_save_host_buffers_nvhe(vcpu); + __debug_save_host_buffers_nvhe(); /* * We're about to restore some new MMU state. Make sure From patchwork Tue Nov 12 10:37:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872126 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 00AFBD32D96 for ; Tue, 12 Nov 2024 11:35:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=sA/W0+Ap46dgMFolE2VZt7mXSZE4H0GHR4Td8I21hdI=; b=Wx71/vlO4mAMyQlx0tnLRxc7Hr swyaA94bq9v/27/T0cq9wwcBshCiNHN+iXEDqRvB3c0XUl8dEcaV4XY3G3PbnSYvU2tmhq0Kuhs0/ FOgazIFKeYW7FzEE0uSoxpVYDuYFNZNNTyOMgTOuRjepfrp0wg4qJNoiKQPfcnpi0wEDtHAcfQx15 Vlka7+4R3mVKj1VXHioPLlSRuDePF3CePB9DND8OY34j/PBTZk+ySfLj/GrL2lKwd+1s+WIApPjRv mweHWzgqrhpW6216QYLqs3TqVo8Bmdn955njBusYfXJmJxAl1a27+SfvGHdBt8Pns275+zl95BhGW TkGsW02w==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tApBJ-00000003FMv-3DtK; Tue, 12 Nov 2024 11:35:37 +0000 Received: from mail-wm1-x329.google.com ([2a00:1450:4864:20::329]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tAoIH-000000034qf-0d6g for linux-arm-kernel@lists.infradead.org; Tue, 12 Nov 2024 10:38:46 +0000 Received: by mail-wm1-x329.google.com with SMTP id 5b1f17b1804b1-43159c9f617so43186645e9.2 for ; Tue, 12 Nov 2024 02:38:44 -0800 (PST) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1731407924; x=1732012724; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=sA/W0+Ap46dgMFolE2VZt7mXSZE4H0GHR4Td8I21hdI=; b=kHkaWOQRMS5CwTIhalIt5iBtj+fqgTNzm9qRdd9qWiqiwCK2dwu+Yj4/tBR2M/FA3p oLPnzJvpA9sWobEoIQOA0wFU32hVwqpO5riJwJXMdH6d8SKjHf9CxjKvI0XvmidjjNYd ZO2intGbYqS0haw19jeIJFoNnOWnTHu4caBAEqzgKDDwglIG4uFv4deOP880XBAPvDFc kJaEj1hMKL8bWNyjfBkimx24Zy5SH8ZOVSq8Kx+Yeqw5k+Zf5PKMcI5ZB/mxFzNb33Q6 7SS15eCWYwx0tqkUGivlAbaMTsDMwWQRv23MGXtf9669fJ3FpjfIaC9XDfCZuy+0pxYr 3NuQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731407924; x=1732012724; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=sA/W0+Ap46dgMFolE2VZt7mXSZE4H0GHR4Td8I21hdI=; b=hZ9chWbZSGGZ37/3UqZv2mtvacTLqTa2S9jI85UqOkV59tqLMfpaQeCTGlinTnOZS4 0ZT0VB8t+VFXa0yAdPLMMEz1nSrdc2ywz612UxvcNWxN2EqT4CFb9C8GUpFI1CFj1MTi 8N0Xxwqf8XF815U33bk/DELxEnIGL9h1w9UtgpGjP9cdpTA+lkfe0LC4WPi1JcQovHwh uigFQFWvsvZJCU3HgHPfdXxTdPTiCsLQ+G7hf0RlXKlbKF1yJnuw3nvik7BdcvDaeagu 3CfUhWTpZIshFaQp8TY8ZUDMHG9DomBLfJv4v7v8XQ1VMyYg2HarvJh26Jk+Kf9Uk6GK Cfzw== X-Forwarded-Encrypted: i=1; AJvYcCVp1mC2BwMe93vvQ3DJz2hyRoNHZ/T7MMN7r3//kNUinmiyi33ZyppgaCY2Zb1PMSrbvNzXUoAbF5l4qrUcE8Sm@lists.infradead.org X-Gm-Message-State: AOJu0YzV04hZdMoCpY7mPUC66adVEkOTGShKks3gsEhr+FZMEx8uEEhK EFtKX6/6rJMHLkMZlqsfy99PNCb2c6MU4R4sagxw5tT7kxLSAjcWmeLp48cksZs= X-Google-Smtp-Source: AGHT+IGTCpYEG2uHYkmh8Yj6DuYaWz+qEgTFVaZXTljNTeTGXpI0pqO9pJabhlxETKSZqiOHdB14Pg== X-Received: by 2002:a05:600c:190f:b0:42f:7e87:3438 with SMTP id 5b1f17b1804b1-432b749ee57mr144547575e9.0.1731407923685; Tue, 12 Nov 2024 02:38:43 -0800 (PST) Received: from pop-os.. ([145.224.90.214]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-432bbf436ffsm142270955e9.44.2024.11.12.02.38.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Nov 2024 02:38:43 -0800 (PST) From: James Clark To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev Cc: James Clark , Marc Zyngier , Joey Gouly , Zenghui Yu , Catalin Marinas , Will Deacon , Mike Leach , Alexander Shishkin , Mark Rutland , Anshuman Khandual , James Morse , Shiqi Liu , Fuad Tabba , Mark Brown , Raghavendra Rao Ananta , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 09/12] KVM: arm64: coresight: Give TRBE enabled state to KVM Date: Tue, 12 Nov 2024 10:37:08 +0000 Message-Id: <20241112103717.589952-10-james.clark@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org> References: <20241112103717.589952-1-james.clark@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241112_023845_233075_2E57D437 X-CRM114-Status: GOOD ( 17.61 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Currently in nVHE, KVM has to check if TRBE is enabled on every guest switch even if it was never used. 
Because it's a debug feature and is more likely to not be used than used, give KVM the TRBE buffer status to allow a much simpler and faster do-nothing path in the hyp. This is always called with preemption disabled except for probe/hotplug which gets wrapped with preempt_disable(). Signed-off-by: James Clark --- arch/arm64/include/asm/kvm_host.h | 4 ++++ arch/arm64/kvm/debug.c | 16 ++++++++++++++++ drivers/hwtracing/coresight/coresight-trbe.c | 15 ++++++++++++--- 3 files changed, 32 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 7f1e32d40f0c..b1dccac996a6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -945,6 +945,8 @@ struct kvm_vcpu_arch { #define HOST_FEAT_HAS_TRF __kvm_single_flag(feats, BIT(2)) /* PMBLIMITR_EL1_E is set (SPE profiling buffer enabled) */ #define HOST_STATE_SPE_EN __kvm_single_flag(state, BIT(0)) +/* TRBLIMITR_EL1_E is set (TRBE trace buffer enabled) */ +#define HOST_STATE_TRBE_EN __kvm_single_flag(state, BIT(1)) /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ @@ -1387,6 +1389,7 @@ void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr); void kvm_clr_pmu_events(u64 clr); bool kvm_set_pmuserenr(u64 val); void kvm_set_pmblimitr(u64 pmblimitr); +void kvm_set_trblimitr(u64 trblimitr); #else static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {} static inline void kvm_clr_pmu_events(u64 clr) {} @@ -1395,6 +1398,7 @@ static inline bool kvm_set_pmuserenr(u64 val) return false; } static inline void kvm_set_pmblimitr(u64 pmblimitr) {} +static inline void kvm_set_trblimitr(u64 trblimitr) {} #endif void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index ed3b4d057c52..e99df2c3f62a 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -364,3 +364,19 @@ void kvm_set_pmblimitr(u64 pmblimitr) host_data_clear_flag(HOST_STATE_SPE_EN); } EXPORT_SYMBOL_GPL(kvm_set_pmblimitr); + +void kvm_set_trblimitr(u64 trblimitr) +{ + /* Only read in nVHE */ + if (has_vhe()) + return; + + if (kvm_arm_skip_trace_state()) + return; + + if (trblimitr & TRBLIMITR_EL1_E) + host_data_set_flag(HOST_STATE_TRBE_EN); + else + host_data_clear_flag(HOST_STATE_TRBE_EN); +} +EXPORT_SYMBOL_GPL(kvm_set_trblimitr); diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c index 96a32b213669..ff281b445682 100644 --- a/drivers/hwtracing/coresight/coresight-trbe.c +++ b/drivers/hwtracing/coresight/coresight-trbe.c @@ -18,6 +18,7 @@ #include #include #include +#include #include "coresight-self-hosted-trace.h" #include "coresight-trbe.h" @@ -213,6 +214,12 @@ static inline void trbe_drain_buffer(void) dsb(nsh); } +static void trbe_write_trblimitr(u64 val) +{ + write_sysreg_s(val, SYS_TRBLIMITR_EL1); + kvm_set_trblimitr(val); +} + static inline void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr) { /* @@ -220,7 +227,7 @@ static inline void set_trbe_enabled(struct trbe_cpudata *cpudata, u64 trblimitr) * might be required for fetching the buffer limits. */ trblimitr |= TRBLIMITR_EL1_E; - write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); + trbe_write_trblimitr(trblimitr); /* Synchronize the TRBE enable event */ isb(); @@ -238,7 +245,7 @@ static inline void set_trbe_disabled(struct trbe_cpudata *cpudata) * might be required for fetching the buffer limits. 
*/ trblimitr &= ~TRBLIMITR_EL1_E; - write_sysreg_s(trblimitr, SYS_TRBLIMITR_EL1); + trbe_write_trblimitr(trblimitr); if (trbe_needs_drain_after_disable(cpudata)) trbe_drain_buffer(); @@ -253,8 +260,10 @@ static void trbe_drain_and_disable_local(struct trbe_cpudata *cpudata) static void trbe_reset_local(struct trbe_cpudata *cpudata) { + preempt_disable(); trbe_drain_and_disable_local(cpudata); - write_sysreg_s(0, SYS_TRBLIMITR_EL1); + trbe_write_trblimitr(0); + preempt_enable(); write_sysreg_s(0, SYS_TRBPTR_EL1); write_sysreg_s(0, SYS_TRBBASER_EL1); write_sysreg_s(0, SYS_TRBSR_EL1); From patchwork Tue Nov 12 10:37:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Clark X-Patchwork-Id: 13872127 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E15C7D32D9C for ; Tue, 12 Nov 2024 11:37:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help :List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From: Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=w4Y12830R+W7ydXEFfviSLc+JWTSWnSDcOvb7pMp1YA=; b=zyjts06ABto0R8WDnA9GSWRkF6 lmTgNNO9j/VwWFiDkdN89HJq4ZNHRKKJ7760YI/XiUpKtiBndRDdaGkm1XFtTYdGPrnMdeGPGqnpx RslYYIteNty+6Mf9cRdJH1geLgy9wWxqKPm/EzyL1sJlKDaqO/Nq8iiq32fhl86jm9Pl9YyC+R9E5 cJ+hbuye7skmKcbhSA5JyljEMmRLtBkj2ryV341xvnoBUijVE//Q1hXk11SphWSAkAvq/nJoDHBz7 yBBClQXfmKA7fK5JfI1/9G6q3xnixxyuW1CshGvqM1qvml/3tynJ70QW//b7Mb3qhBmLtgiuz3o68 wOilPmbQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tApD4-00000003Fd7-1GGq; Tue, 12 Nov 2024 11:37:26 +0000 Received: from mail-wr1-x42a.google.com ([2a00:1450:4864:20::42a]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tAoIR-000000034s8-2k6c for linux-arm-kernel@lists.infradead.org; Tue, 12 Nov 2024 10:38:57 +0000 Received: by mail-wr1-x42a.google.com with SMTP id ffacd0b85a97d-37d50fad249so4105871f8f.1 for ; Tue, 12 Nov 2024 02:38:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1731407934; x=1732012734; darn=lists.infradead.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=w4Y12830R+W7ydXEFfviSLc+JWTSWnSDcOvb7pMp1YA=; b=VVmFUv9iqeB6liE7R0FYQxlPXOw0JvA1fCMwyXpRLCFp2Oge0r/sx16OGTBOQNjI1P pjOQGgAFLcPuUvG7smO3zMpNxzXwMVRaLRKOJEKHsjx3d0bzNY15qegJywHAsIJO2NMk MV+jC48EBrclDufNC0yDB5mui5bt2MtuMXVJESO7FuKgdc2FaMNlsNmghg75njYyyXIq nuoTQmNg+Niwk+sPsAF5PpUNlYeVH2yXtOTO5LEeRtvX3dSRIvUCZlFLF+zxwuFbGKKt L+3TXwVWj3mZ+Rp97T4O+O0uYym+q4plN9EtPNM/oj8fJ3P8sjuH5b7rKmYf9HBFDxgO Posg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1731407934; x=1732012734; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
bh=w4Y12830R+W7ydXEFfviSLc+JWTSWnSDcOvb7pMp1YA=; b=RjhdkfG/h9r/2JZrzpejTfB5JU3BhU4Ncr53VX8Ns1tBwjHtnMirnqy7U5+p5oGrN9 K2I+oYovaCJI9/u2VG0534syBXiYBzx8zrau1AfauubpYnFIja8t1doF0t8LFszLSzmb ozMndkeWwYAYNjazVPFIPHX1fR5kkReYw0Arj/t1fmwY1lgGG6JZazMTPff85DgDL7Ef Yb3f8d8XIHQGecwMTZ5TLLILpdMtaycNVFQW+uLTIU3+f1YyRZ28NzfnS1iSO40wGK3j O2QhHuV1hTfHOSBPH97vwNZhcbbqlsNsjN3/Lel/HqWCV4KVhUyxU2IGnAc9dHv1d8Iu 0R9Q== X-Forwarded-Encrypted: i=1; AJvYcCWZt1wUrwqiv1YEXeKix+mC3Qoch9XogMFhjLh7m+Igeu+Py4xFPSOVquxg+0Hnt3qeMbKR5Ya4f1yLaWEbM0H0@lists.infradead.org X-Gm-Message-State: AOJu0YyfcqSlEMxkaYoSWb+4H4K2N6+k9peKW7HvLY5QKaseeXMDwtPM 2cIcjRVnCmsZDJ15sPfVL7QNbA5CL6WoMzOANEV8xbeaxA5KNrS75XYMs3SA66Y= X-Google-Smtp-Source: AGHT+IFFrkKNgQSMLjx9o9fyzR37XC2Gr6CvPJDOYVlojhp8ArNiU1b7bN0q0qpEMeG5HrMUFygnww== X-Received: by 2002:a5d:6c64:0:b0:37d:4a82:6412 with SMTP id ffacd0b85a97d-381f1885db0mr12627127f8f.46.1731407933761; Tue, 12 Nov 2024 02:38:53 -0800 (PST) Received: from pop-os.. ([145.224.90.214]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-432bbf436ffsm142270955e9.44.2024.11.12.02.38.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Nov 2024 02:38:53 -0800 (PST) From: James Clark To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev Cc: James Clark , Marc Zyngier , Joey Gouly , Zenghui Yu , Catalin Marinas , Will Deacon , Mike Leach , Alexander Shishkin , Mark Rutland , Mark Brown , Anshuman Khandual , James Morse , Shiqi Liu , Fuad Tabba , Raghavendra Rao Ananta , linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org Subject: [PATCH v7 10/12] KVM: arm64: Don't hit sysregs to see if TRBE is enabled or not Date: Tue, 12 Nov 2024 10:37:09 +0000 Message-Id: <20241112103717.589952-11-james.clark@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org> References: <20241112103717.589952-1-james.clark@linaro.org> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20241112_023855_718456_ACC47C41 X-CRM114-Status: GOOD ( 21.55 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Now that the driver tells us whether TRBE was used or not we can use that. Except in pKVM where the host isn't trusted we keep the existing feature + sysreg check. Now in the normal nVHE case, TRBE save and restore are gated by flag checks on kvm_host_data. Instead of using a magic value of host_debug_state.trfcr_el1 to determine whether to restore, add a flag. This will also simplify the logic in the next commit where restoration but no disabling is required. 
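A toy model of the explicit-restore-flag pattern (stand-in names, not the kernel code) shows why the flag is preferable to inferring "needs restore" from a magic saved value, particularly once the hyp may install a new filter value rather than always writing zero:

/*
 * Toy model: the decision to restore is carried by a flag rather than
 * inferred from the saved value. "trfcr" stands in for TRFCR_EL1,
 * "saved_trfcr" for host_debug_state.trfcr_el1 and "restore_trfcr" for
 * the HOST_STATE_RESTORE_TRFCR flag.
 */
#include <assert.h>
#include <stdbool.h>

static unsigned long trfcr;		/* models TRFCR_EL1 */
static unsigned long saved_trfcr;	/* models host_debug_state.trfcr_el1 */
static bool restore_trfcr;		/* models HOST_STATE_RESTORE_TRFCR */

static void hyp_set_trfcr(unsigned long new_val)
{
	saved_trfcr = trfcr;		/* remember the host's filter */
	trfcr = new_val;		/* 0 to prohibit trace, or another filter */
	restore_trfcr = true;		/* "hyp modified TRFCR" */
}

static void restore_trace(void)
{
	if (!restore_trfcr)		/* flag check instead of a sentinel value */
		return;
	trfcr = saved_trfcr;
	restore_trfcr = false;
}

int main(void)
{
	trfcr = 0x3;			/* host had some filtering enabled */
	hyp_set_trfcr(0);		/* guest switch: prohibit trace */
	restore_trace();
	assert(trfcr == 0x3 && !restore_trfcr);

	restore_trace();		/* nothing pending: cheap no-op */
	return 0;
}

The restore decision then depends only on whether the hyp touched the register, not on what value happened to be saved.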
Signed-off-by: James Clark
---
 arch/arm64/include/asm/kvm_host.h  |  2 ++
 arch/arm64/kvm/hyp/nvhe/debug-sr.c | 51 +++++++++++++++++++++++-------
 2 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b1dccac996a6..a8846689512b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -947,6 +947,8 @@ struct kvm_vcpu_arch {
 #define HOST_STATE_SPE_EN __kvm_single_flag(state, BIT(0))
 /* TRBLIMITR_EL1_E is set (TRBE trace buffer enabled) */
 #define HOST_STATE_TRBE_EN __kvm_single_flag(state, BIT(1))
+/* Hyp modified TRFCR */
+#define HOST_STATE_RESTORE_TRFCR __kvm_single_flag(state, BIT(2))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 578c549af3c6..17c23e52f5f4 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -63,32 +63,55 @@ static void __debug_restore_spe(void)
 	*host_data_ptr(host_debug_state.pmscr_el1) = 0;
 }
 
-static void __debug_save_trace(u64 *trfcr_el1)
+static bool __debug_should_save_trace(void)
 {
-	*trfcr_el1 = 0;
+	/* pKVM reads the state for itself rather than trusting the host */
+	if (unlikely(is_protected_kvm_enabled())) {
+		/* Always disable any trace regardless of TRBE */
+		if (read_sysreg_el1(SYS_TRFCR) &
+		    (TRFCR_ELx_E0TRE | TRFCR_ELx_ExTRE))
+			return true;
+
+		/*
+		 * Trace could already be disabled but TRBE buffer
+		 * might still need to be drained if it was in use.
+		 */
+		if (host_data_get_flag(HOST_FEAT_HAS_TRBE))
+			return read_sysreg_s(SYS_TRBLIMITR_EL1) &
+			       TRBLIMITR_EL1_E;
+	}
+
+	return host_data_get_flag(HOST_STATE_TRBE_EN);
+}
 
-	/* Check if the TRBE is enabled */
-	if (!(read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E))
-		return;
+static void __debug_save_trace(void)
+{
 	/*
 	 * Prohibit trace generation while we are in guest.
 	 * Since access to TRFCR_EL1 is trapped, the guest can't
 	 * modify the filtering set by the host.
 	 */
-	*trfcr_el1 = read_sysreg_el1(SYS_TRFCR);
+	*host_data_ptr(host_debug_state.trfcr_el1) = read_sysreg_el1(SYS_TRFCR);
 	write_sysreg_el1(0, SYS_TRFCR);
 	isb();
 
 	/* Drain the trace buffer to memory */
 	tsb_csync();
+
+	host_data_set_flag(HOST_STATE_RESTORE_TRFCR);
 }
 
-static void __debug_restore_trace(u64 trfcr_el1)
+static void __debug_restore_trace(void)
 {
-	if (!trfcr_el1)
+	u64 trfcr_el1;
+
+	if (!host_data_get_flag(HOST_STATE_RESTORE_TRFCR))
 		return;
 
 	/* Restore trace filter controls */
+	trfcr_el1 = *host_data_ptr(host_debug_state.trfcr_el1);
+	*host_data_ptr(host_debug_state.trfcr_el1) = read_sysreg_el1(SYS_TRFCR);
 	write_sysreg_el1(trfcr_el1, SYS_TRFCR);
+	host_data_clear_flag(HOST_STATE_RESTORE_TRFCR);
 }
 
 void __debug_save_host_buffers_nvhe(void)
@@ -97,9 +120,14 @@ void __debug_save_host_buffers_nvhe(void)
 	if (__debug_spe_enabled())
 		__debug_save_spe();
 
+	/* Any trace filtering requires TRFCR register */
+	if (!host_data_get_flag(HOST_FEAT_HAS_TRF))
+		return;
+
 	/* Disable and flush Self-Hosted Trace generation */
-	if (host_data_get_flag(HOST_FEAT_HAS_TRBE))
-		__debug_save_trace(host_data_ptr(host_debug_state.trfcr_el1));
+	if (__debug_should_save_trace())
+		__debug_save_trace();
+
 }
 
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
@@ -110,8 +138,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
 void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
 {
 	__debug_restore_spe();
-	if (host_data_get_flag(HOST_FEAT_HAS_TRBE))
-		__debug_restore_trace(*host_data_ptr(host_debug_state.trfcr_el1));
+	__debug_restore_trace();
 }
 
 void __debug_switch_to_host(struct kvm_vcpu *vcpu)

From patchwork Tue Nov 12 10:37:10 2024
X-Patchwork-Submitter: James Clark
X-Patchwork-Id: 13872134
From: James Clark
To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev
Subject: [PATCH v7 11/12] KVM: arm64: Swap TRFCR on guest switch
Date: Tue, 12 Nov 2024 10:37:10 +0000
Message-Id: <20241112103717.589952-12-james.clark@linaro.org>
In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org>
References: <20241112103717.589952-1-james.clark@linaro.org>

This implements the exclude/include guest rules of the active tracing session.
Only do the swap if a different value is required for the guest; otherwise
the filters remain untouched. In VHE we can just write the value directly
(a condensed sketch of the non-VHE swap follows the diff).

Signed-off-by: James Clark
---
 arch/arm64/include/asm/kvm_host.h  |  4 ++++
 arch/arm64/kvm/debug.c             | 16 ++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/debug-sr.c | 17 +++++++++++++++--
 3 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a8846689512b..9109d10c656e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -949,6 +949,8 @@ struct kvm_vcpu_arch {
 #define HOST_STATE_TRBE_EN __kvm_single_flag(state, BIT(1))
 /* Hyp modified TRFCR */
 #define HOST_STATE_RESTORE_TRFCR __kvm_single_flag(state, BIT(2))
+/* Host wants different trace filters for the guest */
+#define HOST_STATE_SWAP_TRFCR __kvm_single_flag(state, BIT(3))
 
 /* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
 #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \
@@ -1392,6 +1394,7 @@ void kvm_clr_pmu_events(u64 clr);
 bool kvm_set_pmuserenr(u64 val);
 void kvm_set_pmblimitr(u64 pmblimitr);
 void kvm_set_trblimitr(u64 trblimitr);
+void kvm_set_trfcr(u64 host_trfcr, u64 guest_trfcr);
 #else
 static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
 static inline void kvm_clr_pmu_events(u64 clr) {}
@@ -1401,6 +1404,7 @@ static inline bool kvm_set_pmuserenr(u64 val)
 }
 static inline void kvm_set_pmblimitr(u64 pmblimitr) {}
 static inline void kvm_set_trblimitr(u64 trblimitr) {}
+static inline void kvm_set_trfcr(u64 host_trfcr, u64 guest_trfcr) {}
 #endif
 
 void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index e99df2c3f62a..9acec1b67d5f 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -380,3 +380,19 @@ void kvm_set_trblimitr(u64 trblimitr)
 	host_data_clear_flag(HOST_STATE_TRBE_EN);
 }
 EXPORT_SYMBOL_GPL(kvm_set_trblimitr);
+
+void kvm_set_trfcr(u64 host_trfcr, u64 guest_trfcr)
+{
+	if (kvm_arm_skip_trace_state())
+		return;
+
+	if (has_vhe())
+		write_sysreg_s(guest_trfcr, SYS_TRFCR_EL12);
+	else
+		if (host_trfcr != guest_trfcr) {
+			*host_data_ptr(host_debug_state.trfcr_el1) = guest_trfcr;
+			host_data_set_flag(HOST_STATE_SWAP_TRFCR);
+		} else
+			host_data_clear_flag(HOST_STATE_SWAP_TRFCR);
+}
+EXPORT_SYMBOL_GPL(kvm_set_trfcr);
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 17c23e52f5f4..47602c4d160a 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -100,6 +100,15 @@ static void __debug_save_trace(void)
 	host_data_set_flag(HOST_STATE_RESTORE_TRFCR);
 }
 
+static void __debug_swap_trace(void)
+{
+	u64 trfcr = read_sysreg_el1(SYS_TRFCR);
+
+	write_sysreg_el1(*host_data_ptr(host_debug_state.trfcr_el1), SYS_TRFCR);
+	*host_data_ptr(host_debug_state.trfcr_el1) = trfcr;
+	host_data_set_flag(HOST_STATE_RESTORE_TRFCR);
+}
+
 static void __debug_restore_trace(void)
 {
 	u64 trfcr_el1;
@@ -124,10 +133,14 @@ void __debug_save_host_buffers_nvhe(void)
 	if (!host_data_get_flag(HOST_FEAT_HAS_TRF))
 		return;
 
-	/* Disable and flush Self-Hosted Trace generation */
+	/*
+	 * Disable and flush Self-Hosted Trace generation for pKVM and TRBE,
+	 * or swap if host requires different guest filters.
+	 */
 	if (__debug_should_save_trace())
 		__debug_save_trace();
-
+	else if (host_data_get_flag(HOST_STATE_SWAP_TRFCR))
+		__debug_swap_trace();
 }
 
 void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
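As mentioned above, a condensed sketch of the swap decision (names taken from
this patch; the helper name and the omission of the kvm_arm_skip_trace_state()
check are simplifications for illustration):

/* Illustrative sketch only, not the literal patch code. */
void kvm_set_trfcr_sketch(u64 host_trfcr, u64 guest_trfcr)
{
	if (has_vhe()) {
		/* VHE: the guest view of TRFCR can be written directly. */
		write_sysreg_s(guest_trfcr, SYS_TRFCR_EL12);
	} else if (host_trfcr != guest_trfcr) {
		/* nVHE: stash the guest value; hyp swaps it in around guest entry. */
		*host_data_ptr(host_debug_state.trfcr_el1) = guest_trfcr;
		host_data_set_flag(HOST_STATE_SWAP_TRFCR);
	} else {
		/* Same filters either way: nothing for hyp to do. */
		host_data_clear_flag(HOST_STATE_SWAP_TRFCR);
	}
}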
From patchwork Tue Nov 12 10:37:11 2024
X-Patchwork-Submitter: James Clark
X-Patchwork-Id: 13872135
From: James Clark
To: suzuki.poulose@arm.com, oliver.upton@linux.dev, coresight@lists.linaro.org, kvmarm@lists.linux.dev
Subject: [PATCH v7 12/12] coresight: Pass guest TRFCR value to KVM
Date: Tue, 12 Nov 2024 10:37:11 +0000
Message-Id: <20241112103717.589952-13-james.clark@linaro.org>
In-Reply-To: <20241112103717.589952-1-james.clark@linaro.org>
References: <20241112103717.589952-1-james.clark@linaro.org>

Currently the userspace and kernel filters for guests are never set, so no
trace is generated for them. Add support for tracing guests by passing the
desired TRFCR value to KVM so that it can be applied to the guest.

By writing either E1TRE or E0TRE, filtering on the guest kernel or guest
userspace is also supported. If both E1TRE and E0TRE are cleared when
exclude_guest is set, that option is supported too.

This change also brings exclude_host support, which would be difficult to add
as a separate commit without excess churn or an intermediate state that
produces no trace at all.
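Condensed sketch of how the driver derives the two filter values and hands
them to KVM (function and field names follow the etm4x hunks further below;
simplified, for illustration only):

/* Illustrative sketch only, not the literal driver code. */
static void etm4x_allow_trace_sketch(struct etmv4_drvdata *drvdata)
{
	u64 trfcr, guest_trfcr;

	/* Host filter: honour exclude_host, else the kernel/user filters. */
	if (drvdata->config.mode & ETM_MODE_EXCL_HOST)
		trfcr = drvdata->trfcr & ~(TRFCR_ELx_ExTRE | TRFCR_ELx_E0TRE);
	else
		trfcr = etm4x_get_kern_user_filter(drvdata);
	write_trfcr(trfcr);

	/* Guest filter: honour exclude_guest, else the kernel/user filters. */
	if (drvdata->config.mode & ETM_MODE_EXCL_GUEST)
		guest_trfcr = drvdata->trfcr & ~(TRFCR_ELx_ExTRE | TRFCR_ELx_E0TRE);
	else
		guest_trfcr = etm4x_get_kern_user_filter(drvdata);

	/* TRFCR_EL1 has no CX bit, so it must not leak into the guest value. */
	kvm_set_trfcr(trfcr, guest_trfcr & ~TRFCR_EL2_CX);
}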
Testing
=======

The addresses were counted with the following:

  $ perf report -D | grep -Eo 'EL2|EL1|EL0' | sort | uniq -c

Guest kernel only:

  $ perf record -e cs_etm//Gk -a -- true
    535 EL1
      1 EL2

Guest user only (only 5 addresses because the guest runs slowly in the model):

  $ perf record -e cs_etm//Gu -a -- true
      5 EL0

Host kernel only:

  $ perf record -e cs_etm//Hk -a -- true
   3501 EL2

Host userspace only:

  $ perf record -e cs_etm//Hu -a -- true
    408 EL0
      1 EL2

Reviewed-by: Suzuki K Poulose
Signed-off-by: James Clark
Signed-off-by: James Clark
---
 .../coresight/coresight-etm4x-core.c          | 43 ++++++++++++++++---
 drivers/hwtracing/coresight/coresight-etm4x.h |  2 +-
 drivers/hwtracing/coresight/coresight-priv.h  |  3 ++
 3 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index 66d44a404ad0..347dea49a996 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -271,9 +272,23 @@ static void etm4x_prohibit_trace(struct etmv4_drvdata *drvdata)
 	/* If the CPU doesn't support FEAT_TRF, nothing to do */
 	if (!drvdata->trfcr)
 		return;
+
+	kvm_set_trfcr(0, 0);
 	cpu_prohibit_trace();
 }
 
+static u64 etm4x_get_kern_user_filter(struct etmv4_drvdata *drvdata)
+{
+	u64 trfcr = drvdata->trfcr;
+
+	if (drvdata->config.mode & ETM_MODE_EXCL_KERN)
+		trfcr &= ~TRFCR_ELx_ExTRE;
+	if (drvdata->config.mode & ETM_MODE_EXCL_USER)
+		trfcr &= ~TRFCR_ELx_E0TRE;
+
+	return trfcr;
+}
+
 /*
  * etm4x_allow_trace - Allow CPU tracing in the respective ELs,
  * as configured by the drvdata->config.mode for the current
@@ -286,18 +301,28 @@ static void etm4x_prohibit_trace(struct etmv4_drvdata *drvdata)
  */
 static void etm4x_allow_trace(struct etmv4_drvdata *drvdata)
 {
-	u64 trfcr = drvdata->trfcr;
+	u64 trfcr, guest_trfcr;
 
 	/* If the CPU doesn't support FEAT_TRF, nothing to do */
-	if (!trfcr)
+	if (!drvdata->trfcr)
 		return;
 
-	if (drvdata->config.mode & ETM_MODE_EXCL_KERN)
-		trfcr &= ~TRFCR_ELx_ExTRE;
-	if (drvdata->config.mode & ETM_MODE_EXCL_USER)
-		trfcr &= ~TRFCR_ELx_E0TRE;
+	if (drvdata->config.mode & ETM_MODE_EXCL_HOST)
+		trfcr = drvdata->trfcr & ~(TRFCR_ELx_ExTRE | TRFCR_ELx_E0TRE);
+	else
+		trfcr = etm4x_get_kern_user_filter(drvdata);
 
 	write_trfcr(trfcr);
+
+	/* Set filters for guests and pass to KVM */
+	if (drvdata->config.mode & ETM_MODE_EXCL_GUEST)
+		guest_trfcr = drvdata->trfcr & ~(TRFCR_ELx_ExTRE | TRFCR_ELx_E0TRE);
+	else
+		guest_trfcr = etm4x_get_kern_user_filter(drvdata);
+
+	/* TRFCR_EL1 doesn't have CX so mask it out. */
+	guest_trfcr &= ~TRFCR_EL2_CX;
+	kvm_set_trfcr(trfcr, guest_trfcr);
 }
 
 #ifdef CONFIG_ETM4X_IMPDEF_FEATURE
@@ -655,6 +680,12 @@ static int etm4_parse_event_config(struct coresight_device *csdev,
 	if (attr->exclude_user)
 		config->mode = ETM_MODE_EXCL_USER;
 
+	if (attr->exclude_host)
+		config->mode |= ETM_MODE_EXCL_HOST;
+
+	if (attr->exclude_guest)
+		config->mode |= ETM_MODE_EXCL_GUEST;
+
 	/* Always start from the default config */
 	etm4_set_default_config(config);
 
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 9e9165f62e81..1119762b5cec 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -817,7 +817,7 @@ enum etm_impdef_type {
  * @s_ex_level: Secure ELs where tracing is supported.
  */
 struct etmv4_config {
-	u32	mode;
+	u64	mode;
 	u32	pe_sel;
 	u32	cfg;
 	u32	eventctrl0;
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
index 05f891ca6b5c..76403530f33e 100644
--- a/drivers/hwtracing/coresight/coresight-priv.h
+++ b/drivers/hwtracing/coresight/coresight-priv.h
@@ -42,6 +42,9 @@ extern const struct device_type coresight_dev_type[];
 #define ETM_MODE_EXCL_KERN	BIT(30)
 #define ETM_MODE_EXCL_USER	BIT(31)
 
+#define ETM_MODE_EXCL_HOST	BIT(32)
+#define ETM_MODE_EXCL_GUEST	BIT(33)
+
 struct cs_pair_attribute {
 	struct device_attribute attr;
 	u32 lo_off;