From patchwork Mon Feb 14 06:57:20 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12745003
Date: Sun, 13 Feb 2022 22:57:20 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-2-reijiw@google.com>
Subject: [PATCH v5 01/27] KVM: arm64: Introduce a validation function for an ID register
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
 Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang,
 Raghavendra Rao Anata, Reiji Watanabe

Introduce arm64_check_features_kvm(), which performs basic validity checking
of an ID register value against the register's limit value, which is
generally the host's sanitized value. This function will be used by the
following patches to check whether an ID register value that userspace tries
to set for a guest can be supported on the host.

The validation is done using arm64_ftr_bits_kvm, which is created from
arm64_ftr_regs, with some entries overwritten by entries from
arm64_ftr_bits_kvm_override.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/cpufeature.h |   1 +
 arch/arm64/kernel/cpufeature.c      | 229 ++++++++++++++++++++++++++++
 2 files changed, 230 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index ef6be92b1921..a9edf1ca7dcb 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -631,6 +631,7 @@ void check_local_cpu_capabilities(void); u64 read_sanitised_ftr_reg(u32 id); u64 __read_sysreg_by_encoding(u32 sys_id); +int arm64_check_features_kvm(u32 sys_reg, u64 val, u64 limit); static inline bool cpu_supports_mixed_endian_el0(void) { diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index e5f23dab1c8d..bc0ed09aa1b5 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -928,6 +928,10 @@ static void init_32bit_cpu_features(struct cpuinfo_32bit *info) init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2); } +#ifdef CONFIG_KVM +static void init_arm64_ftr_bits_kvm(void); +#endif + void __init init_cpu_features(struct cpuinfo_arm64 *info) { /* Before we start using the tables, make sure it is sorted */ @@ -970,6 +974,14 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info) * after we have initialised the CPU feature infrastructure. */ setup_boot_cpu_capabilities(); + +#ifdef CONFIG_KVM + /* + * Initialize arm64_ftr_bits_kvm, which will be used to provide + * KVM with general feature checking for its guests.
+ */ + init_arm64_ftr_bits_kvm(); +#endif } static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new) @@ -3156,3 +3168,220 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, return sprintf(buf, "Vulnerable\n"); } } + +#ifdef CONFIG_KVM +/* + * arm64_ftr_bits_kvm[] is used for KVM to check if features that are + * indicated in an ID register value for the guest are available on the host. + * arm64_ftr_bits_kvm[] is created based on arm64_ftr_regs[]. But, for + * registers for which arm64_ftr_bits_kvm_override[] has a corresponding + * entry, replace arm64_ftr_bits entries in arm64_ftr_bits_kvm[] with the + * ones in arm64_ftr_bits_kvm_override[]. + * + * What to add to arm64_ftr_bits_kvm_override[] shouldn't be decided according + * to KVM's implementation, but according to schemes of ID register fields. + * (e.g. For ID_AA64DFR0_EL1.PMUVER, a higher value on the field indicates + * more features. So, the arm64_ftr_bits' type for the field can be + * FTR_LOWER_SAFE instead of FTR_EXACT unlike arm64_ftr_regs) + */ + +/* + * The number of __ftr_reg_bits_entry entries in arm64_ftr_bits_kvm must be + * the same as the number of __ftr_reg_entry entries in arm64_ftr_regs. + */ +static struct __ftr_reg_bits_entry { + u32 sys_id; + struct arm64_ftr_bits *ftr_bits; +} arm64_ftr_bits_kvm[ARRAY_SIZE(arm64_ftr_regs)]; + +/* + * Number of arm64_ftr_bits entries for each register. + * (Number of 4 bits fields in 64 bit register + 1 entry for ARM64_FTR_END) + */ +#define MAX_FTR_BITS_LEN 17 + +/* Use FTR_LOWER_SAFE for AA64DFR0_EL1.PMUVER and AA64DFR0_EL1.DEBUGVER. */ +static struct arm64_ftr_bits ftr_id_aa64dfr0_kvm[MAX_FTR_BITS_LEN] = { + S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMUVER_SHIFT, 4, 0), + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6), + ARM64_FTR_END, +}; + +#define ARM64_FTR_REG_BITS(id, table) { \ + .sys_id = id, \ + .ftr_bits = &((table)[0]), \ +} + +/* + * All entries in arm64_ftr_bits_kvm_override[] are used to override + * the corresponding entries in arm64_ftr_bits_kvm[]. + */ +static struct __ftr_reg_bits_entry arm64_ftr_bits_kvm_override[] = { + ARM64_FTR_REG_BITS(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0_kvm), +}; + +/* + * Override entries in @orig_ftrp with the ones in @new_ftrp when their shift + * fields match. The last entry of @orig_ftrp and @new_ftrp must be + * ARM64_FTR_END (.width == 0). + */ +static void arm64_ftr_reg_bits_override(struct arm64_ftr_bits *orig_ftrp, + const struct arm64_ftr_bits *new_ftrp) +{ + const struct arm64_ftr_bits *n_ftrp; + struct arm64_ftr_bits *o_ftrp; + + for (n_ftrp = new_ftrp; n_ftrp->width; n_ftrp++) { + for (o_ftrp = orig_ftrp; o_ftrp->width; o_ftrp++) { + if (o_ftrp->shift == n_ftrp->shift) { + *o_ftrp = *n_ftrp; + break; + } + } + } +} + +/* + * Copy arm64_ftr_bits entries from @src_ftrp to @dst_ftrp. The last entries + * of @dst_ftrp and @src_ftrp must be ARM64_FTR_END (.width == 0). + */ +static void copy_arm64_ftr_bits(struct arm64_ftr_bits *dst_ftrp, + const struct arm64_ftr_bits *src_ftrp) +{ + int i = 0; + + for (; src_ftrp[i].width; i++) { + if (WARN_ON_ONCE(i >= (MAX_FTR_BITS_LEN - 1))) + break; + + dst_ftrp[i] = src_ftrp[i]; + } + + dst_ftrp[i].width = 0; +} + +/* + * Initialize arm64_ftr_bits_kvm. Copy arm64_ftr_bits for each ID register + * from arm64_ftr_regs to arm64_ftr_bits_kvm, and then override entries in + * arm64_ftr_bits_kvm with ones in arm64_ftr_bits_kvm_override. 
+ */ +static void init_arm64_ftr_bits_kvm(void) +{ + struct arm64_ftr_bits ftr_temp[MAX_FTR_BITS_LEN]; + static struct __ftr_reg_bits_entry *bits, *o_bits; + int i, j; + + /* Copy entries from arm64_ftr_regs to arm64_ftr_bits_kvm */ + for (i = 0; i < ARRAY_SIZE(arm64_ftr_bits_kvm); i++) { + bits = &arm64_ftr_bits_kvm[i]; + bits->sys_id = arm64_ftr_regs[i].sys_id; + bits->ftr_bits = (struct arm64_ftr_bits *)arm64_ftr_regs[i].reg->ftr_bits; + }; + + /* + * Override the entries in arm64_ftr_bits_kvm with the ones in + * arm64_ftr_bits_kvm_override. + */ + for (i = 0; i < ARRAY_SIZE(arm64_ftr_bits_kvm_override); i++) { + o_bits = &arm64_ftr_bits_kvm_override[i]; + for (j = 0; j < ARRAY_SIZE(arm64_ftr_bits_kvm); j++) { + bits = &arm64_ftr_bits_kvm[j]; + if (bits->sys_id != o_bits->sys_id) + continue; + + /* + * The code below tries to sustain the ordering of + * entries in bits even in o_bits, just in case + * arm64_ftr_bits entries in arm64_ftr_regs have + * any ordering requirements in the future (so that + * the ones in arm64_ftr_bits_kvm_override doesn't + * have to care). + * So, rather than directly copying them to empty + * slots in o_bits, the code simply copies entries + * from bits to o_bits first, then overrides them with + * original entries in o_bits. + */ + memset(ftr_temp, 0, sizeof(ftr_temp)); + + /* + * Temporary save all entries in o_bits->ftr_bits + * to ftr_temp. + */ + copy_arm64_ftr_bits(ftr_temp, o_bits->ftr_bits); + + /* + * Copy entries from bits->ftr_bits to o_bits->ftr_bits. + */ + copy_arm64_ftr_bits(o_bits->ftr_bits, bits->ftr_bits); + + /* + * Override entries in o_bits->ftr_bits with the + * saved ones, and update bits->ftr_bits with + * o_bits->ftr_bits. + */ + arm64_ftr_reg_bits_override(o_bits->ftr_bits, ftr_temp); + bits->ftr_bits = o_bits->ftr_bits; + break; + } + } +} + +static int search_cmp_ftr_reg_bits(const void *id, const void *regp) +{ + return ((int)(unsigned long)id - + (int)((const struct __ftr_reg_bits_entry *)regp)->sys_id); +} + +static const struct arm64_ftr_bits *get_arm64_ftr_bits_kvm(u32 sys_id) +{ + const struct __ftr_reg_bits_entry *ret; + + ret = bsearch((const void *)(unsigned long)sys_id, + arm64_ftr_bits_kvm, + ARRAY_SIZE(arm64_ftr_bits_kvm), + sizeof(arm64_ftr_bits_kvm[0]), + search_cmp_ftr_reg_bits); + if (ret) + return ret->ftr_bits; + + return NULL; +} + +/* + * Check if features (or levels of features) that are indicated in the ID + * register value @val are also indicated in @limit. + * This function is for KVM to check if features that are indicated in @val, + * which will be used as the ID register value for its guest, are supported + * on the host. + * For AA64MMFR0_EL1.TGranX_2 fields, which don't follow the standard ID + * scheme, the function checks if values of the fields in @val are the same + * as the ones in @limit. + */ +int arm64_check_features_kvm(u32 sys_reg, u64 val, u64 limit) +{ + const struct arm64_ftr_bits *ftrp = get_arm64_ftr_bits_kvm(sys_reg); + u64 exposed_mask = 0; + + if (!ftrp) + return -ENOENT; + + for (; ftrp->width; ftrp++) { + s64 ftr_val = arm64_ftr_value(ftrp, val); + s64 ftr_lim = arm64_ftr_value(ftrp, limit); + + exposed_mask |= arm64_ftr_mask(ftrp); + + if (ftr_val == ftr_lim) + continue; + + if (ftr_val != arm64_ftr_safe_value(ftrp, ftr_val, ftr_lim)) + return -E2BIG; + } + + /* Make sure that no unrecognized fields are set in @val. 
*/ + if (val & ~exposed_mask) + return -E2BIG; + + return 0; +} +#endif /* CONFIG_KVM */
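
The check introduced above walks a register's arm64_ftr_bits entries and rejects any field in @val that indicates more than the corresponding field in @limit. As a rough illustration of that per-field comparison, here is a standalone sketch (plain C, compilable with any libc) that models only the simple unsigned, lower-safe case; the field layout and the values in main() are made up for the example and are not the kernel's:

/*
 * Illustrative, standalone model of the field-wise check done by
 * arm64_check_features_kvm(): each 4-bit ID field of "val" must not
 * indicate more than the corresponding field of "limit".  This is a
 * simplified sketch (unsigned, lower-safe fields only), not the kernel
 * implementation; the register values below are invented.
 */
#include <stdint.h>
#include <stdio.h>

static int check_lower_safe(uint64_t val, uint64_t limit)
{
	int shift;

	for (shift = 0; shift < 64; shift += 4) {
		uint64_t v = (val >> shift) & 0xf;
		uint64_t l = (limit >> shift) & 0xf;

		if (v > l)	/* would expose an unsupported feature level */
			return -1;
	}
	return 0;
}

int main(void)
{
	uint64_t host_limit = 0x0000000000000021ULL;	/* hypothetical limit */
	uint64_t guest_ok   = 0x0000000000000011ULL;	/* every field <= limit */
	uint64_t guest_bad  = 0x0000000000000031ULL;	/* one field exceeds limit */

	printf("ok:  %d\n", check_lower_safe(guest_ok, host_limit));	/* 0 */
	printf("bad: %d\n", check_lower_safe(guest_bad, host_limit));	/* -1 */
	return 0;
}

The real function additionally honours signed fields, exact-match fields such as the TGranX_2 ones, and the exposed-mask check shown in the patch.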
From patchwork Mon Feb 14 06:57:21 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12745004
Date: Sun, 13 Feb 2022 22:57:21 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-3-reijiw@google.com>
Subject: [PATCH v5 02/27] KVM: arm64: Save ID registers' sanitized value per guest
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
 Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang,
 Raghavendra Rao Anata, Reiji Watanabe

Introduce id_regs[] in kvm_arch as storage for the guest's ID registers, and
save the ID registers' sanitized values in the array at KVM_CREATE_VM. Use
the saved values when ID registers are read by the guest or by userspace
(via KVM_GET_ONE_REG).

Signed-off-by: Reiji Watanabe
---
 arch/arm64/include/asm/kvm_host.h | 12 ++++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/sys_regs.c         | 65 ++++++++++++++++++++++++-------
 3 files changed, 63 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 2869259e10c0..c041e5afe3d2 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -101,6 +101,13 @@ struct kvm_s2_mmu { struct kvm_arch_memory_slot { }; +/* + * (Op0, Op1, CRn, CRm, Op2) of ID registers is (3, 0, 0, crm, op2), + * where 0<=crm<8, 0<=op2<8. + */ +#define KVM_ARM_ID_REG_MAX_NUM 64 +#define IDREG_IDX(id) ((sys_reg_CRm(id) << 3) | sys_reg_Op2(id)) + struct kvm_arch { struct kvm_s2_mmu mmu; @@ -137,6 +144,9 @@ struct kvm_arch { /* Memory Tagging Extension enabled for the guest */ bool mte_enabled; bool ran_once; + + /* ID registers for the guest.
*/ + u64 id_regs[KVM_ARM_ID_REG_MAX_NUM]; }; struct kvm_vcpu_fault_info { @@ -736,6 +746,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, struct kvm_arm_copy_mte_tags *copy_tags); +void set_default_id_regs(struct kvm *kvm); + /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 4783dbf66df2..91110d996ed6 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -156,6 +156,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) kvm->arch.max_vcpus = kvm_arm_default_max_vcpus(); set_default_spectre(kvm); + set_default_id_regs(kvm); return ret; out_free_stage2_pgd: diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 4dc2fba316ff..080908c60fa6 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -33,6 +33,8 @@ #include "trace.h" +static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id); + /* * All of this file is extremely similar to the ARM coproc.c, but the * types are different. My gut feeling is that it should be pretty @@ -273,7 +275,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); + u64 val = __read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1); u32 sr = reg_to_encoding(r); if (!(val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))) { @@ -1059,17 +1061,16 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, return true; } -/* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(const struct kvm_vcpu *vcpu, - struct sys_reg_desc const *r, bool raz) +static bool is_id_reg(u32 id) { - u32 id = reg_to_encoding(r); - u64 val; - - if (raz) - return 0; + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 0 && + sys_reg_CRm(id) < 8); +} - val = read_sanitised_ftr_reg(id); +static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) +{ + u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)]; switch (id) { case SYS_ID_AA64PFR0_EL1: @@ -1119,6 +1120,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu, return val; } +static u64 read_id_reg(const struct kvm_vcpu *vcpu, + struct sys_reg_desc const *r, bool raz) +{ + u32 id = reg_to_encoding(r); + + return raz ? 0 : __read_id_reg(vcpu, id); +} + static unsigned int id_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -1223,9 +1232,8 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, /* * cpufeature ID register user accessors * - * For now, these registers are immutable for userspace, so no values - * are stored, and for set_id_reg() we don't allow the effective value - * to be changed. + * For now, these registers are immutable for userspace, so for set_id_reg() + * we don't allow the effective value to be changed. 
*/ static int __get_id_reg(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, void __user *uaddr, @@ -1837,8 +1845,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu, if (p->is_write) { return ignore_write(vcpu, p); } else { - u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); - u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); + u64 dfr = __read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1); + u64 pfr = __read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1); u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL3_SHIFT); p->regval = ((((dfr >> ID_AA64DFR0_WRPS_SHIFT) & 0xf) << 28) | @@ -2850,3 +2858,30 @@ void kvm_sys_reg_table_init(void) /* Clear all higher bits. */ cache_levels &= (1 << (i*3))-1; } + +/* + * Set the guest's ID registers that are defined in sys_reg_descs[] + * with ID_SANITISED() to the host's sanitized value. + */ +void set_default_id_regs(struct kvm *kvm) +{ + int i; + u32 id; + const struct sys_reg_desc *rd; + u64 val; + + for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) { + rd = &sys_reg_descs[i]; + if (rd->access != access_id_reg) + /* Not ID register, or hidden/reserved ID register */ + continue; + + id = reg_to_encoding(rd); + if (WARN_ON_ONCE(!is_id_reg(id))) + /* Shouldn't happen */ + continue; + + val = read_sanitised_ftr_reg(id); + kvm->arch.id_regs[IDREG_IDX(id)] = val; + } +}
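
A quick sketch of the index calculation behind id_regs[]: IDREG_IDX() packs the CRm and Op2 fields of the (3, 0, 0, CRm, Op2) encoding into an index in the range 0..63, which is why KVM_ARM_ID_REG_MAX_NUM is 64. The standalone model below writes the CRm/Op2 values out by hand just to show the mapping; it is illustrative only, not kernel code:

/*
 * Illustrative, standalone model of how IDREG_IDX() maps an ID register
 * encoding (Op0=3, Op1=0, CRn=0, CRm=0..7, Op2=0..7) to an index into the
 * 64-entry id_regs[] array added to struct kvm_arch.
 */
#include <stdio.h>

#define IDREG_IDX(crm, op2)	(((crm) << 3) | (op2))

int main(void)
{
	/* ID_AA64PFR0_EL1 is (3, 0, 0, 4, 0); ID_AA64MMFR1_EL1 is (3, 0, 0, 7, 1). */
	printf("ID_AA64PFR0_EL1  -> idx %d\n", IDREG_IDX(4, 0));	/* 32 */
	printf("ID_AA64MMFR1_EL1 -> idx %d\n", IDREG_IDX(7, 1));	/* 57 */
	/* 8 CRm values x 8 Op2 values = 64 slots, hence KVM_ARM_ID_REG_MAX_NUM. */
	return 0;
}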
From patchwork Mon Feb 14 06:57:22 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12745005
Date: Sun, 13 Feb 2022 22:57:22 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-4-reijiw@google.com>
Subject: [PATCH v5 03/27] KVM: arm64: Introduce struct id_reg_info
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
 Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang,
 Raghavendra Rao Anata, Reiji Watanabe

This patch lays the groundwork to make ID registers writable.

Introduce struct id_reg_info for an ID register to manage the
register-specific control of its value for the guest, and provide a set of
functions commonly used for ID registers to make them writable. The
id_reg_info is used for register-specific initialization, validation of the
ID register, and so on.

Not all ID registers must have an id_reg_info. ID registers that don't have
one are handled in a common way that is applied to all ID registers.

At present, changing an ID register from userspace is allowed only if the ID
register has an id_reg_info, but that will be changed by the following
patches. No ID register has the structure yet; the following patches will add
id_reg_info entries for some ID registers.

kvm_set_id_reg_feature(), which is introduced in this patch, is going to be
used by a following patch, outside of sys_regs.c, when an ID register field
needs to be updated.
Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/sysreg.h | 1 + arch/arm64/kvm/sys_regs.c | 280 ++++++++++++++++++++++++++++-- 3 files changed, 268 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index c041e5afe3d2..9ffe6604a58a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -747,6 +747,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, struct kvm_arm_copy_mte_tags *copy_tags); void set_default_id_regs(struct kvm *kvm); +int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval); /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 898bee0004ae..3f12e7036985 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -1214,6 +1214,7 @@ #define ICH_VTR_TDS_MASK (1 << ICH_VTR_TDS_SHIFT) #define ARM64_FEATURE_FIELD_BITS 4 +#define ARM64_FEATURE_FIELD_MASK ((1ull << ARM64_FEATURE_FIELD_BITS) - 1) /* Create a mask for the feature bits of the specified feature. */ #define ARM64_FEATURE_MASK(x) (GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT)) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 080908c60fa6..da76516f2aad 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -265,6 +265,113 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu, return read_zero(vcpu, p); } +struct id_reg_info { + /* Register ID */ + u32 sys_reg; + + /* + * Limit value of the register for a vcpu. The value is the sanitized + * system value with bits set/cleared for unsupported features for the + * guest. + */ + u64 vcpu_limit_val; + + /* Fields that are not validated by arm64_check_features_kvm. */ + u64 ignore_mask; + + /* An optional initialization function of the id_reg_info */ + void (*init)(struct id_reg_info *id_reg); + + /* + * This is an optional ID register specific validation function. When + * userspace tries to set the ID register, arm64_check_features_kvm() + * will check if the requested value indicates any features that cannot + * be supported by KVM on the host. But, some ID register fields need + * a special checking, and this function can be used for such fields. + * e.g. When SVE is configured for a vCPU by KVM_ARM_VCPU_INIT, + * ID_AA64PFR0_EL1.SVE shouldn't be set to 0 for the vCPU. + * The validation function for ID_AA64PFR0_EL1 could be used to check + * the field is consistent with SVE configuration. + */ + int (*validate)(struct kvm_vcpu *vcpu, const struct id_reg_info *id_reg, + u64 val); + + /* + * Return a bitmask of the vCPU's ID register fields that are not + * synced with saved (per VM) ID register value, which usually + * indicates opt-in CPU features that are not configured for the vCPU. + * ID registers are saved per VM, but some opt-in CPU features can + * be configured per vCPU. The saved (per VM) values for such + * features are for vCPUs with the features (and zero for + * vCPUs without the features). + * Return value of this function is used to handle such fields + * for per vCPU ID register read/write request with saved per VM + * ID register. See the __write_id_reg's comment for more detail. 
+ */ + u64 (*vcpu_mask)(const struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg); +}; + +static void id_reg_info_init(struct id_reg_info *id_reg) +{ + u64 val = read_sanitised_ftr_reg(id_reg->sys_reg); + + id_reg->vcpu_limit_val = val; + if (id_reg->init) + id_reg->init(id_reg); + + /* + * id_reg->init() might update id_reg->vcpu_limit_val. + * Make sure that id_reg->vcpu_limit_val, which will be the default + * register value for guests, is a safe value to use for guests + * on the host. + */ + WARN_ON_ONCE(arm64_check_features_kvm(id_reg->sys_reg, + id_reg->vcpu_limit_val, val)); +} + +/* + * An ID register that needs special handling to control the value for the + * guest must have its own id_reg_info in id_reg_info_table. + * (i.e. the reset value is different from the host's sanitized value, + * the value is affected by opt-in features, some fields need specific + * validation, etc.) + */ +#define GET_ID_REG_INFO(id) (id_reg_info_table[IDREG_IDX(id)]) +static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = {}; + +static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) +{ + const struct id_reg_info *id_reg = GET_ID_REG_INFO(id); + u64 limit, tmp_val; + int err; + + if (id_reg) { + limit = id_reg->vcpu_limit_val; + /* + * Replace the fields that are indicated in ignore_mask with + * the value in the limit to not have arm64_check_features_kvm() + * check the field in @val. + */ + tmp_val = val & ~id_reg->ignore_mask; + tmp_val |= (limit & id_reg->ignore_mask); + } else { + limit = read_sanitised_ftr_reg(id); + tmp_val = val; + } + + /* Check if the value indicates any feature that is not in the limit. */ + err = arm64_check_features_kvm(id, tmp_val, limit); + if (err) + return err; + + if (id_reg && id_reg->validate) + /* Run the ID register specific validity check. */ + err = id_reg->validate(vcpu, id_reg, val); + + return err; +} + /* * ARMv8.1 mandates at least a trivial LORegion implementation, where all the * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0 @@ -1068,9 +1175,91 @@ static bool is_id_reg(u32 id) sys_reg_CRm(id) < 8); } +static u64 read_kvm_id_reg(struct kvm *kvm, u32 id) +{ + return kvm->arch.id_regs[IDREG_IDX(id)]; +} + +static int __modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val, + u64 preserve_mask) +{ + u64 old, new; + + lockdep_assert_held(&kvm->lock); + + old = kvm->arch.id_regs[IDREG_IDX(id)]; + + /* Preserve the value at the bit position set in preserve_mask */ + new = old & preserve_mask; + new |= (val & ~preserve_mask); + + /* Don't allow to modify ID register value after KVM_RUN on any vCPUs */ + if (kvm->arch.ran_once && new != old) + return -EBUSY; + + WRITE_ONCE(kvm->arch.id_regs[IDREG_IDX(id)], new); + + return 0; +} + +static int modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val, + u64 preserve_mask) +{ + int ret; + + mutex_lock(&kvm->lock); + ret = __modify_kvm_id_reg(kvm, id, val, preserve_mask); + mutex_unlock(&kvm->lock); + + return ret; +} + +static int write_kvm_id_reg(struct kvm *kvm, u32 id, u64 val) +{ + return modify_kvm_id_reg(kvm, id, val, 0); +} + +/* + * KVM basically forces all vCPUs of the guest to have a uniform value for + * each ID register (it means KVM_SET_ONE_REG for a vCPU affects all + * the vCPUs of the guest), and the id_regs[] of kvm_arch holds values + * of ID registers for the guest. However, there is an exception for + * ID register fields corresponding to CPU features that can be + * configured per vCPU by KVM_ARM_VCPU_INIT, or etc (e.g. PMUv3, SVE, etc). 
+ * For such fields, all vCPUs that have the feature will have a non-zero + * uniform value, which can be updated by userspace, but the vCPUs that + * don't have the feature will have zero for the fields. + * Values that @id_regs holds are for vCPUs that have such features. So, + * to get the ID register value for a vCPU that doesn't have those features, + * the corresponding fields in id_regs[] needs to be cleared. + * A bitmask of the fields are provided by id_reg_info's vcpu_mask(), and + * __write_id_reg() and __read_id_reg() take care of those fields using + * the bitmask. + */ +static int __write_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) +{ + const struct id_reg_info *id_reg = GET_ID_REG_INFO(id); + u64 mask = 0; + + if (id_reg && id_reg->vcpu_mask) + mask = id_reg->vcpu_mask(vcpu, id_reg); + + /* + * Update the ID register for the guest with @val, except for fields + * that are set in the mask, which indicates fields for opt-in + * features that are not configured for the vCPU. + */ + return modify_kvm_id_reg(vcpu->kvm, id, val, mask); +} + static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) { - u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)]; + const struct id_reg_info *id_reg = GET_ID_REG_INFO(id); + u64 val = read_kvm_id_reg(vcpu->kvm, id); + + if (id_reg && id_reg->vcpu_mask) + /* Clear fields for opt-in features that are not configured. */ + val &= ~(id_reg->vcpu_mask(vcpu, id_reg)); switch (id) { case SYS_ID_AA64PFR0_EL1: @@ -1229,12 +1418,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, return 0; } -/* - * cpufeature ID register user accessors - * - * For now, these registers are immutable for userspace, so for set_id_reg() - * we don't allow the effective value to be changed. - */ +/* cpufeature ID register user accessors */ static int __get_id_reg(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, void __user *uaddr, bool raz) @@ -1245,11 +1429,31 @@ static int __get_id_reg(const struct kvm_vcpu *vcpu, return reg_to_user(uaddr, &val, id); } -static int __set_id_reg(const struct kvm_vcpu *vcpu, +/* + * Check if the given id indicates AArch32 ID register encoding. + */ +static bool is_aarch32_id_reg(u32 id) +{ + u32 crm, op2; + + if (!is_id_reg(id)) + return false; + + crm = sys_reg_CRm(id); + op2 = sys_reg_Op2(id); + if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7))) + /* AArch32 ID register */ + return true; + + return false; +} + +static int __set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, void __user *uaddr, bool raz) { const u64 id = sys_reg_to_index(rd); + u32 encoding = reg_to_encoding(rd); int err; u64 val; @@ -1257,11 +1461,28 @@ static int __set_id_reg(const struct kvm_vcpu *vcpu, if (err) return err; - /* This is what we mean by invariant: you can't change it. */ - if (val != read_id_reg(vcpu, rd, raz)) + if (val == read_id_reg(vcpu, rd, raz)) + /* The value is same as the current value. Nothing to do. */ + return 0; + + /* + * Don't allow to modify the register's value if the register is raz, + * or the reg doesn't have the id_reg_info. + */ + if (raz || !GET_ID_REG_INFO(encoding)) return -EINVAL; - return 0; + /* + * Skip the validation of AArch32 ID registers if the system doesn't + * 32bit EL0 (their value are UNKNOWN). 
+ */ + if (system_supports_32bit_el0() || !is_aarch32_id_reg(encoding)) { + err = validate_id_reg(vcpu, encoding, val); + if (err) + return err; + } + + return __write_id_reg(vcpu, encoding, val); } static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, @@ -2823,6 +3044,20 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) return write_demux_regids(uindices); } +static void id_reg_info_init_all(void) +{ + int i; + struct id_reg_info *id_reg; + + for (i = 0; i < ARRAY_SIZE(id_reg_info_table); i++) { + id_reg = (struct id_reg_info *)id_reg_info_table[i]; + if (!id_reg) + continue; + + id_reg_info_init(id_reg); + } +} + void kvm_sys_reg_table_init(void) { unsigned int i; @@ -2857,6 +3092,8 @@ void kvm_sys_reg_table_init(void) break; /* Clear all higher bits. */ cache_levels &= (1 << (i*3))-1; + + id_reg_info_init_all(); } /* @@ -2869,11 +3106,12 @@ void set_default_id_regs(struct kvm *kvm) u32 id; const struct sys_reg_desc *rd; u64 val; + struct id_reg_info *idr; for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) { rd = &sys_reg_descs[i]; if (rd->access != access_id_reg) - /* Not ID register, or hidden/reserved ID register */ + /* Not ID register or hidden/reserved ID register */ continue; id = reg_to_encoding(rd); @@ -2881,7 +3119,21 @@ void set_default_id_regs(struct kvm *kvm) /* Shouldn't happen */ continue; - val = read_sanitised_ftr_reg(id); - kvm->arch.id_regs[IDREG_IDX(id)] = val; + idr = GET_ID_REG_INFO(id); + val = idr ? idr->vcpu_limit_val : read_sanitised_ftr_reg(id); + WARN_ON_ONCE(write_kvm_id_reg(kvm, id, val)); } } + +/* + * Update the ID register's field with @fval for the guest. + * The caller is expected to hold the kvm->lock. + * This will not fail unless any vCPUs in the guest have started. 
+ */ +int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval) +{ + u64 val = ((u64)fval & ARM64_FEATURE_FIELD_MASK) << field_shift; + u64 preserve_mask = ~(ARM64_FEATURE_FIELD_MASK << field_shift); + + return __modify_kvm_id_reg(kvm, id, val, preserve_mask); +}
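
One detail of validate_id_reg() worth illustrating is the ignore_mask handling: fields named in id_reg_info.ignore_mask are overwritten with the limit's value before the common check runs, so arm64_check_features_kvm() never fails because of them. Below is a small standalone model of just that masking step; the mask and register values are invented for the example and are not from the patch:

/*
 * Illustrative, standalone model of how validate_id_reg() neutralizes the
 * fields listed in ignore_mask before running the common limit check:
 * ignored fields of the user-supplied value are replaced with the limit's
 * fields, so only the remaining fields are validated.  Not kernel code.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t limit       = 0x0000000000002310ULL;	/* hypothetical limit value */
	uint64_t user_val    = 0x00000000000F2310ULL;	/* hypothetical user request */
	uint64_t ignore_mask = 0x00000000000F0000ULL;	/* field exempt from the check */

	/* Fields in ignore_mask take the limit's value for checking purposes. */
	uint64_t tmp = (user_val & ~ignore_mask) | (limit & ignore_mask);

	printf("value actually checked: 0x%llx\n", (unsigned long long)tmp);	/* 0x2310 */
	return 0;
}

After this substitution, any register-specific ->validate() hook (if present) still sees the original user value, so a field that is exempt from the generic check can still be validated by hand, as the GIC field of ID_AA64PFR0_EL1 is in the next patch.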
From patchwork Mon Feb 14 06:57:23 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12745006
Date: Sun, 13 Feb 2022 22:57:23 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-5-reijiw@google.com>
Subject: [PATCH v5 04/27] KVM: arm64: Make ID_AA64PFR0_EL1 writable
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
 Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang,
 Raghavendra Rao Anata, Reiji Watanabe

This patch adds id_reg_info for ID_AA64PFR0_EL1 to make it writable by
userspace.

Return an error if userspace tries to set the SVE/GIC fields of the register
to values that conflict with the SVE/GIC configuration for the guest. The
SIMD/FP/SVE fields of the requested value are validated according to the
Arm ARM.
Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/sysreg.h | 1 + arch/arm64/kvm/sys_regs.c | 150 +++++++++++++++++++------------- arch/arm64/kvm/vgic/vgic-init.c | 9 ++ 3 files changed, 101 insertions(+), 59 deletions(-) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 3f12e7036985..bfdf32ff5985 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -813,6 +813,7 @@ #define ID_AA64PFR0_ASIMD_SUPPORTED 0x0 #define ID_AA64PFR0_ELx_64BIT_ONLY 0x1 #define ID_AA64PFR0_ELx_32BIT_64BIT 0x2 +#define ID_AA64PFR0_GIC3 0x1 /* id_aa64pfr1 */ #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index da76516f2aad..14df7c112e57 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -330,6 +330,93 @@ static void id_reg_info_init(struct id_reg_info *id_reg) id_reg->vcpu_limit_val, val)); } +static int validate_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + int fp, simd; + unsigned int gic; + bool vcpu_has_sve = vcpu_has_sve(vcpu); + bool pfr0_has_sve = id_aa64pfr0_sve(val); + + simd = cpuid_feature_extract_signed_field(val, ID_AA64PFR0_ASIMD_SHIFT); + fp = cpuid_feature_extract_signed_field(val, ID_AA64PFR0_FP_SHIFT); + /* AdvSIMD field must have the same value as FP field */ + if (simd != fp) + return -EINVAL; + + /* fp must be supported when sve is supported */ + if (pfr0_has_sve && (fp < 0)) + return -EINVAL; + + /* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */ + if (vcpu_has_sve ^ pfr0_has_sve) + return -EPERM; + + if ((irqchip_in_kernel(vcpu->kvm) && + vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3)) { + gic = cpuid_feature_extract_unsigned_field(val, + ID_AA64PFR0_GIC_SHIFT); + if (gic == 0) + return -EPERM; + + if (gic > ID_AA64PFR0_GIC3) + return -E2BIG; + } else { + u64 mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC); + int r = arm64_check_features_kvm(id_reg->sys_reg, val & mask, + id_reg->vcpu_limit_val & mask); + if (r) + return r; + } + + return 0; +} + +static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) +{ + u64 limit = id_reg->vcpu_limit_val; + unsigned int gic; + + limit &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_AMU); + if (!system_supports_sve()) + limit &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); + + /* + * The default is to expose CSV2 == 1 and CSV3 == 1 if the HW + * isn't affected. Userspace can override this as long as it + * doesn't promise the impossible. + */ + limit &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2) | + ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3)); + + if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED) + limit |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), 1); + if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED) + limit |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), 1); + + gic = cpuid_feature_extract_unsigned_field(limit, ID_AA64PFR0_GIC_SHIFT); + if (gic > 1) { + /* Limit to GICv3.0/4.0 */ + limit &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC); + limit |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), ID_AA64PFR0_GIC3); + } + id_reg->vcpu_limit_val = limit; +} + +static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, + const struct id_reg_info *idr) +{ + return vcpu_has_sve(vcpu) ? 
0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); +} + +static struct id_reg_info id_aa64pfr0_el1_info = { + .sys_reg = SYS_ID_AA64PFR0_EL1, + .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), + .init = init_id_aa64pfr0_el1_info, + .validate = validate_id_aa64pfr0_el1, + .vcpu_mask = vcpu_mask_id_aa64pfr0_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -338,7 +425,9 @@ static void id_reg_info_init(struct id_reg_info *id_reg) * validation, etc.) */ #define GET_ID_REG_INFO(id) (id_reg_info_table[IDREG_IDX(id)]) -static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = {}; +static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { + [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, +}; static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) { @@ -1262,20 +1351,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~(id_reg->vcpu_mask(vcpu, id_reg)); switch (id) { - case SYS_ID_AA64PFR0_EL1: - if (!vcpu_has_sve(vcpu)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_AMU); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2); - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3); - if (irqchip_in_kernel(vcpu->kvm) && - vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) { - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), 1); - } - break; case SYS_ID_AA64PFR1_EL1: if (!kvm_has_mte(vcpu->kvm)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE); @@ -1376,48 +1451,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; } -static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, - const struct sys_reg_desc *rd, - const struct kvm_one_reg *reg, void __user *uaddr) -{ - const u64 id = sys_reg_to_index(rd); - u8 csv2, csv3; - int err; - u64 val; - - err = reg_from_user(&val, uaddr, id); - if (err) - return err; - - /* - * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as - * it doesn't promise more than what is actually provided (the - * guest could otherwise be covered in ectoplasmic residue). 
- */ - csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_CSV2_SHIFT); - if (csv2 > 1 || - (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* Same thing for CSV3 */ - csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_CSV3_SHIFT); - if (csv3 > 1 || - (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED)) - return -EINVAL; - - /* We can only differ with CSV[23], and anything else is an error */ - val ^= read_id_reg(vcpu, rd, false); - val &= ~((0xFUL << ID_AA64PFR0_CSV2_SHIFT) | - (0xFUL << ID_AA64PFR0_CSV3_SHIFT)); - if (val) - return -EINVAL; - - vcpu->kvm->arch.pfr0_csv2 = csv2; - vcpu->kvm->arch.pfr0_csv3 = csv3 ; - - return 0; -} - /* cpufeature ID register user accessors */ static int __get_id_reg(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, void __user *uaddr, @@ -1731,8 +1764,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { /* AArch64 ID registers */ /* CRm=4 */ - { SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg, - .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, }, + ID_SANITISED(ID_AA64PFR0_EL1), ID_SANITISED(ID_AA64PFR1_EL1), ID_UNALLOCATED(4,2), ID_UNALLOCATED(4,3), diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c index fc00304fe7d8..f0632b46fbf9 100644 --- a/arch/arm64/kvm/vgic/vgic-init.c +++ b/arch/arm64/kvm/vgic/vgic-init.c @@ -117,6 +117,15 @@ int kvm_vgic_create(struct kvm *kvm, u32 type) else INIT_LIST_HEAD(&kvm->arch.vgic.rd_regions); + if (type == KVM_DEV_TYPE_ARM_VGIC_V3) + /* + * Set ID_AA64PFR0_EL1.GIC to 1. This shouldn't fail unless + * any vCPUs in the guest have started. + */ + WARN_ON_ONCE(kvm_set_id_reg_feature(kvm, SYS_ID_AA64PFR0_EL1, + ID_AA64PFR0_GIC_SHIFT, + ID_AA64PFR0_GIC3)); + out_unlock: unlock_all_vcpus(kvm); return ret;
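
To make the new validate_id_aa64pfr0_el1() rules concrete, the standalone sketch below models two of them: the AdvSIMD field must equal the FP field, and FP must not read as "not implemented" (0xf, i.e. -1 when sign-extended) while SVE is advertised. The helper and the sample values are invented for the illustration; only the field shifts (FP=16, AdvSIMD=20, SVE=32) follow the architectural layout used by the patch:

/*
 * Illustrative, standalone model of two ID_AA64PFR0_EL1 rules enforced by
 * validate_id_aa64pfr0_el1(): AdvSIMD must match FP, and FP must be
 * implemented when SVE is advertised.  Field extraction mimics the signed
 * 4-bit ID scheme; this is a sketch, not the kernel implementation.
 */
#include <stdint.h>
#include <stdio.h>

#define PFR0_FP_SHIFT		16
#define PFR0_ASIMD_SHIFT	20
#define PFR0_SVE_SHIFT		32

static int extract_signed(uint64_t reg, int shift)
{
	/* Move the 4-bit field to the top, then arithmetic-shift to sign-extend. */
	return (int)((int64_t)(reg << (60 - shift)) >> 60);
}

static int validate_pfr0(uint64_t val)
{
	int fp   = extract_signed(val, PFR0_FP_SHIFT);
	int simd = extract_signed(val, PFR0_ASIMD_SHIFT);
	int sve  = (int)((val >> PFR0_SVE_SHIFT) & 0xf);

	if (simd != fp)
		return -1;	/* AdvSIMD and FP must hold the same value */
	if (sve && fp < 0)
		return -1;	/* SVE requires FP/SIMD to be implemented */
	return 0;
}

int main(void)
{
	uint64_t ok  = 1ULL << PFR0_SVE_SHIFT;			/* FP=0, SIMD=0, SVE=1 */
	uint64_t bad = (1ULL << PFR0_SVE_SHIFT) |
		       (0xfULL << PFR0_FP_SHIFT) |
		       (0xfULL << PFR0_ASIMD_SHIFT);		/* SVE=1 but FP/SIMD absent */

	printf("ok:  %d\n", validate_pfr0(ok));		/* 0 */
	printf("bad: %d\n", validate_pfr0(bad));	/* -1 */
	return 0;
}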
From patchwork Mon Feb 14 06:57:24 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12745007
Date: Sun, 13 Feb 2022 22:57:24 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-6-reijiw@google.com>
Subject: [PATCH v5 05/27] KVM: arm64: Make ID_AA64PFR1_EL1 writable
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
 Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang,
 Raghavendra Rao Anata, Reiji Watanabe

This patch adds id_reg_info for ID_AA64PFR1_EL1 to make it writable by
userspace.

Return an error if userspace tries to set the MTE field of the register to a
value that conflicts with the KVM_CAP_ARM_MTE configuration for the guest.

Validation of the fractional feature fields is skipped for now; they will be
handled by the following patches.
Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/sysreg.h | 1 + arch/arm64/kvm/sys_regs.c | 42 +++++++++++++++++++++++++++++---- 2 files changed, 39 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index bfdf32ff5985..b7fb26f5a8f8 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -816,6 +816,7 @@ #define ID_AA64PFR0_GIC3 0x1 /* id_aa64pfr1 */ +#define ID_AA64PFR1_CSV2FRAC_SHIFT 32 #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 #define ID_AA64PFR1_RASFRAC_SHIFT 12 #define ID_AA64PFR1_MTE_SHIFT 8 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 14df7c112e57..b41e9662736d 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -372,6 +372,21 @@ static int validate_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, return 0; } +static int validate_id_aa64pfr1_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + bool kvm_mte = kvm_has_mte(vcpu->kvm); + unsigned int mte; + + mte = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR1_MTE_SHIFT); + + /* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT. */ + if (kvm_mte ^ (mte > 0)) + return -EPERM; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -403,12 +418,24 @@ static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) id_reg->vcpu_limit_val = limit; } +static void init_id_aa64pfr1_el1_info(struct id_reg_info *id_reg) +{ + if (!system_supports_mte()) + id_reg->vcpu_limit_val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE); +} + static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, const struct id_reg_info *idr) { return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); } +static u64 vcpu_mask_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, + const struct id_reg_info *idr) +{ + return kvm_has_mte(vcpu->kvm) ? 0 : (ARM64_FEATURE_MASK(ID_AA64PFR1_MTE)); +} + static struct id_reg_info id_aa64pfr0_el1_info = { .sys_reg = SYS_ID_AA64PFR0_EL1, .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), @@ -417,6 +444,16 @@ static struct id_reg_info id_aa64pfr0_el1_info = { .vcpu_mask = vcpu_mask_id_aa64pfr0_el1, }; +static struct id_reg_info id_aa64pfr1_el1_info = { + .sys_reg = SYS_ID_AA64PFR1_EL1, + .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR1_RASFRAC) | + ARM64_FEATURE_MASK(ID_AA64PFR1_MPAMFRAC) | + ARM64_FEATURE_MASK(ID_AA64PFR1_CSV2FRAC), + .init = init_id_aa64pfr1_el1_info, + .validate = validate_id_aa64pfr1_el1, + .vcpu_mask = vcpu_mask_id_aa64pfr1_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. 
@@ -427,6 +464,7 @@ static struct id_reg_info id_aa64pfr0_el1_info = { #define GET_ID_REG_INFO(id) (id_reg_info_table[IDREG_IDX(id)]) static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, + [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, }; static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) @@ -1351,10 +1389,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~(id_reg->vcpu_mask(vcpu, id_reg)); switch (id) { - case SYS_ID_AA64PFR1_EL1: - if (!kvm_has_mte(vcpu->kvm)) - val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE); - break; case SYS_ID_AA64ISAR1_EL1: if (!vcpu_has_ptrauth(vcpu)) val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | From patchwork Mon Feb 14 06:57:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745008 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CF60BC433F5 for ; Mon, 14 Feb 2022 07:02:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=lvoO8766rWm5F4bgkPhrJLdGEduU1EEnsxrTS8CeyBA=; b=pDv5Xqiqc5h6E2FuhwJKboaQxe PKOGVEuJ6ZNMnmlJZKjK5DbHy9A3YiuHclE7IlMPap8ORIP+Wbgf1oJ0Lqr6LYYoIW2b5EWPmSnf8 e5j5RTFxdZYfAZfJ7rSA0SgapbWhIeeLmH6gvmjGFcmcxd8f578k7Cyp+z9xdlY0R22EmATLncbZ0 EFmVOlY5ve5NkmUupyh5qk+/lfuUoaL3u+y0Ux5jv3q8ErDba8SoijQrRzRxxJ56rJsb0qoZYXylp lWFkFweW3JHG0UCg6T4w2NRrSUUVqagi63zI2vkeDdGe/a2WWegBS58mOllxYYd2tcetpW/tGGPo7 Mid08oOw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLT-00DWxb-5T; Mon, 14 Feb 2022 07:00:23 +0000 Received: from mail-pf1-x44a.google.com ([2607:f8b0:4864:20::44a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVKd-00DWX8-UM for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 06:59:33 +0000 Received: by mail-pf1-x44a.google.com with SMTP id i16-20020aa78d90000000b004be3e88d746so11101946pfr.13 for ; Sun, 13 Feb 2022 22:59:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=M/a/Vc/shmJJLkjZEVBmniQlBoL+OJyInbuc84cKOVo=; b=oHiVtTsoFuP7yxvyHQnSpouOUcn7G4EYC+Ud3GVI/+GZOlUg97m7sK+ac0e7acp21k br/kA9rY0/54R62sM2+J6XotY6GpkxYuJsq96VMGi2rokSONE9v65Nk0WLHRhgqQ+kkz j6XS69iblworG4svJ04iUpjYT8hYS4RSKesB+K2856MWjkpvigJyP0bOMW1mo2gcJBrl 5Qj4dZUhifjRold70IjyqXd944ao9fLKeVarR4PxB6/a47gB3xqndq4YcD3bYKqjmNBf ktzk+MGTgVt6OZbt1a0wY1KSoLsOl64r7hw+BPjxAIQ1OfkQjvNbPcsoT3XoxHuKUeDq 31Ag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=M/a/Vc/shmJJLkjZEVBmniQlBoL+OJyInbuc84cKOVo=; 
b=F7gq+UaIBYIpXCcZbkudoIv41ei+7gNXXPZV57JbPltT/7RuTSGXcnTM5JLZez6HZc KOyV5yx02XuV1dL57H+Y1cRq+avxlAUzs6cjkODzn5Xsg/lLiHFoIvaAnJG8iCqc/IDx d2zGIanbJmWa2Q5NNaSOX9+A+OJB9jJXk5qMcBng7lzE8h5KYzOl0ZU8NEePCnJGZIFi FVfNhHig1msjENRR5vgsjj9onAOij2xwoZgYcBqHwhsE0IItcqbDdPMhm+1FMzjIQJj9 7IIzBERzea/IyPC33E87iQfnyMUJE0FeDXhBOfof1jBV5OyFilu4vSgy0gfmh4ctHwLT 6qSA== X-Gm-Message-State: AOAM533RN3eIEQXjR2fBqjYHq55lCIdG+NGGB1fuxsZ44Tki2CcSA66z z4VNEWSHAfur6s9ZZngltuvlFD71BZE= X-Google-Smtp-Source: ABdhPJwsfhL7pM3Pz0k0600RZWcFVmJjjZYx2f1pfIZDVbRupYBHsW6Obk+gUclfejqilFnthPKptjfIvSo= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:90b:4c8e:: with SMTP id my14mr1640298pjb.0.1644821969667; Sun, 13 Feb 2022 22:59:29 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:25 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-7-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 06/27] KVM: arm64: Make ID_AA64ISAR0_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_225932_026421_84FA5D28 X-CRM114-Status: GOOD ( 12.70 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for ID_AA64ISAR0_EL1 to make it writable by userspace. Updating sm3, sm4, sha1, sha2 and sha3 fields are allowed only if values of those fields follow Arm ARM. 
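The cross-field rules enforced here follow the Arm ARM dependencies between the crypto fields: SM3 and SM4 must advertise the same level, SHA1 and SHA2 must be present or absent together, and SHA512 (SHA2 == 2) and SHA3 imply each other and require SHA1. Restated as a standalone check purely for illustration (the *_SHIFT values below are assumptions mirroring the usual ID_AA64ISAR0_EL1 layout, not taken from this patch):

#include <stdbool.h>
#include <stdint.h>

#define ISAR0_SHA1_SHIFT	8
#define ISAR0_SHA2_SHIFT	12
#define ISAR0_SHA3_SHIFT	32
#define ISAR0_SM3_SHIFT		36
#define ISAR0_SM4_SHIFT		40

static inline unsigned int isar0_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;		/* all of these are 4-bit fields */
}

static bool isar0_crypto_fields_consistent(uint64_t val)
{
	unsigned int sha1 = isar0_field(val, ISAR0_SHA1_SHIFT);
	unsigned int sha2 = isar0_field(val, ISAR0_SHA2_SHIFT);
	unsigned int sha3 = isar0_field(val, ISAR0_SHA3_SHIFT);
	unsigned int sm3  = isar0_field(val, ISAR0_SM3_SHIFT);
	unsigned int sm4  = isar0_field(val, ISAR0_SM4_SHIFT);

	if (sm3 != sm4)				/* SM3 and SM4 go together */
		return false;
	if ((sha1 == 0) != (sha2 == 0))		/* SHA1 and SHA2 go together */
		return false;
	/* SHA512 (SHA2 == 2) and SHA3 imply each other, and both need SHA1 */
	if (((sha2 == 2) != (sha3 == 1)) || (!sha1 && sha3))
		return false;
	return true;
}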
Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index b41e9662736d..eb2ae03cbf54 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -387,6 +387,29 @@ static int validate_id_aa64pfr1_el1(struct kvm_vcpu *vcpu, return 0; } +static int validate_id_aa64isar0_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + unsigned int sm3, sm4, sha1, sha2, sha3; + + /* Run consistency checkings according to Arm ARM */ + sm3 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SM3_SHIFT); + sm4 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SM4_SHIFT); + if (sm3 != sm4) + return -EINVAL; + + sha1 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SHA1_SHIFT); + sha2 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SHA2_SHIFT); + if ((sha1 == 0) ^ (sha2 == 0)) + return -EINVAL; + + sha3 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SHA3_SHIFT); + if (((sha2 == 2) ^ (sha3 == 1)) || (!sha1 && sha3)) + return -EINVAL; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -454,6 +477,11 @@ static struct id_reg_info id_aa64pfr1_el1_info = { .vcpu_mask = vcpu_mask_id_aa64pfr1_el1, }; +static struct id_reg_info id_aa64isar0_el1_info = { + .sys_reg = SYS_ID_AA64ISAR0_EL1, + .validate = validate_id_aa64isar0_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -465,6 +493,7 @@ static struct id_reg_info id_aa64pfr1_el1_info = { static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, + [IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info, }; static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) From patchwork Mon Feb 14 06:57:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745009 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2077FC433EF for ; Mon, 14 Feb 2022 07:02:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=7/mKrP197/vT5UfZUKMs3vJYDBofw49YG6RCxyQzGE4=; b=FJKIcyaJ1rZb8eqAq/MGtIkdsw 6f4spYxUuKbHxFuSJbHqvmGNjp0/udUTd5IytXaKhG6NOwIgw4vm+tqjQlRsTw55ZSOBrY/iuzpoK jD+sFwQrXth6WhqUJpFa5XXPESBDhiHkGRog5SmC2D2HEvBipaItt65ypvozpD4nMHNhh4Yv4fPpA OAOz10zjTCi17Aw4CHvh9gCUeaO/H+DeF98KpMjmiPjeJi43hEJtenbHot96DVbaHp6zSFcJxYFJV 5ESKUgiGMRVBTkDAQTDfxN0qzSRHFDqbGJQRFtnMFwa3GI5TNauqgQ+wU6fPbnndztrg8mDErGgs6 xZ0NJKaA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by 
bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLw-00DXCP-L2; Mon, 14 Feb 2022 07:00:52 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVKi-00DWa5-ND for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 06:59:38 +0000 Received: by mail-pj1-x104a.google.com with SMTP id a15-20020a17090ad80f00b001b8a1e1da50so11510322pjv.6 for ; Sun, 13 Feb 2022 22:59:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9HWK2NuyhhyEgB8wjO5dIoYbvn8UVleDNCoL8GsEyHo=; b=ZetGQBiCPI1x9OJJic+8ZrDGNot52XaUnaWWc3V8haKmWLaCliopibcQ4OPJfG2wd4 MiC1wGtd+xOxrH/A1++qveFTpTi7WJ7MpUng8sOWpJkQXjnH4E74mZwiysSq3bvAHR9n pAxElk0XlPcknnyNNIMu/CfIEGd0bpa807HcGSJibtt8CbezAabn+IxSaMiglSfuOXGO 4MOboLh83MQsBy7ErlmYnmxq+5D7eV9sqRZ86wB6ZGaaNkDmbxDwHV7uIvuAuBmX3XSP W39PDgD8eGr0KgFO4hX8N4L8m5qfGofJFd50151Fmv1eQXiE/eQf9SgehXDP1UvjiqP0 qRZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=9HWK2NuyhhyEgB8wjO5dIoYbvn8UVleDNCoL8GsEyHo=; b=EQViZDoW277kyxJBs7nUbiSeFWzIZmNafIG6r27kQXL7Wp86t91w9eZRadSluHgv7w ZynJpX0WLMEymC0zKDDG10fRxFXX/eoSJ04GWVGslZRMQovZ05ytYKe05ztGc3XVHgAn JnWwTT2vTr7xrqwoLw7bpMTxf4y2+e/uHUpJLgEnsmtRd1Eq3VXnKR0qGUVqnPKROt5X CBpmWaFq52r5De+WF7TwMshfuzAyB7fksWhWk6wvA6CYkfqaQ+K8KmBAqeWyFmxGvrXw 2jlV2RerwuCb+Hg1zQFQMPW5RuJDGoCJ4IzEx5yUtxAT96USReUKj/p/9O1TmrCO2+hf Gw7A== X-Gm-Message-State: AOAM530hDEikMb06QuCiLnhBTQZiCihJjtyVGNuXntfE6ipJVUovDIlY UuPcOPhGpY3q8p5AvUY0qo+6N3fLejs= X-Google-Smtp-Source: ABdhPJyCaHptDFMSE+RPFCR/UHTTbzXP99lLXwipwWCS7IQN6bwAutS1tWd7nWyBwrjwPq0xKJD0r1fIUkI= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:902:c94c:: with SMTP id i12mr12741956pla.18.1644821975617; Sun, 13 Feb 2022 22:59:35 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:26 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-8-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 07/27] KVM: arm64: Make ID_AA64ISAR1_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_225936_826206_72110FD2 X-CRM114-Status: GOOD ( 18.18 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for ID_AA64ISAR1_EL1 to make it writable by userspace. Return an error if userspace tries to set PTRAUTH related fields of the register to values that conflict with PTRAUTH configuration, which was configured by KVM_ARM_VCPU_INIT, for the guest. 
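Because KVM only supports enabling address and generic authentication together, the four fields APA, API, GPA and GPI effectively move as a group: for a vCPU created without the PTRAUTH vcpu features, userspace has to present all four as zero for a write of ID_AA64ISAR1_EL1 to be accepted. A small illustrative helper is below; the field shifts are assumptions based on the architectural layout (not values taken from this patch), and isar1_without_ptrauth() is a made-up name.

#include <stdint.h>

#define ISAR1_APA_SHIFT		4
#define ISAR1_API_SHIFT		8
#define ISAR1_GPA_SHIFT		24
#define ISAR1_GPI_SHIFT		28

#define ISAR1_PTRAUTH_MASK	((0xfULL << ISAR1_APA_SHIFT) | \
				 (0xfULL << ISAR1_API_SHIFT) | \
				 (0xfULL << ISAR1_GPA_SHIFT) | \
				 (0xfULL << ISAR1_GPI_SHIFT))

/* Value userspace would write back for a guest without pointer auth. */
static inline uint64_t isar1_without_ptrauth(uint64_t host_val)
{
	return host_val & ~ISAR1_PTRAUTH_MASK;
}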
Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 77 +++++++++++++++++++++++++++++++++++---- 1 file changed, 69 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index eb2ae03cbf54..7032a7285447 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -265,6 +265,24 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu, return read_zero(vcpu, p); } +#define PTRAUTH_MASK (ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | \ + ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | \ + ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) | \ + ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI)) + +#define aa64isar1_has_apa(val) \ + (cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_APA_SHIFT) >= \ + ID_AA64ISAR1_APA_ARCHITECTED) +#define aa64isar1_has_api(val) \ + (cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_API_SHIFT) >= \ + ID_AA64ISAR1_API_IMP_DEF) +#define aa64isar1_has_gpa(val) \ + (cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPA_SHIFT) >= \ + ID_AA64ISAR1_GPA_ARCHITECTED) +#define aa64isar1_has_gpi(val) \ + (cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \ + ID_AA64ISAR1_GPI_IMP_DEF) + struct id_reg_info { /* Register ID */ u32 sys_reg; @@ -410,6 +428,36 @@ static int validate_id_aa64isar0_el1(struct kvm_vcpu *vcpu, return 0; } +static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + bool has_gpi, has_gpa, has_api, has_apa; + bool generic, address; + + has_gpi = aa64isar1_has_gpi(val); + has_gpa = aa64isar1_has_gpa(val); + has_api = aa64isar1_has_api(val); + has_apa = aa64isar1_has_apa(val); + if ((has_gpi && has_gpa) || (has_api && has_apa)) + return -EINVAL; + + generic = has_gpi || has_gpa; + address = has_api || has_apa; + /* + * Since the current KVM guest implementation works by enabling + * both address/generic pointer authentication features, + * return an error if they conflict. + */ + if (generic ^ address) + return -EPERM; + + /* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */ + if (vcpu_has_ptrauth(vcpu) ^ (generic && address)) + return -EPERM; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -447,8 +495,14 @@ static void init_id_aa64pfr1_el1_info(struct id_reg_info *id_reg) id_reg->vcpu_limit_val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE); } +static void init_id_aa64isar1_el1_info(struct id_reg_info *id_reg) +{ + if (!system_has_full_ptr_auth()) + id_reg->vcpu_limit_val &= ~PTRAUTH_MASK; +} + static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, - const struct id_reg_info *idr) + const struct id_reg_info *idr) { return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); } @@ -459,6 +513,12 @@ static u64 vcpu_mask_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, return kvm_has_mte(vcpu->kvm) ? 0 : (ARM64_FEATURE_MASK(ID_AA64PFR1_MTE)); } +static u64 vcpu_mask_id_aa64isar1_el1(const struct kvm_vcpu *vcpu, + const struct id_reg_info *idr) +{ + return vcpu_has_ptrauth(vcpu) ? 
0 : PTRAUTH_MASK; +} + static struct id_reg_info id_aa64pfr0_el1_info = { .sys_reg = SYS_ID_AA64PFR0_EL1, .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), @@ -482,6 +542,13 @@ static struct id_reg_info id_aa64isar0_el1_info = { .validate = validate_id_aa64isar0_el1, }; +static struct id_reg_info id_aa64isar1_el1_info = { + .sys_reg = SYS_ID_AA64ISAR1_EL1, + .init = init_id_aa64isar1_el1_info, + .validate = validate_id_aa64isar1_el1, + .vcpu_mask = vcpu_mask_id_aa64isar1_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -494,6 +561,7 @@ static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, [IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info, + [IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info, }; static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) @@ -1418,13 +1486,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~(id_reg->vcpu_mask(vcpu, id_reg)); switch (id) { - case SYS_ID_AA64ISAR1_EL1: - if (!vcpu_has_ptrauth(vcpu)) - val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) | - ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI)); - break; case SYS_ID_AA64DFR0_EL1: /* Limit debug to ARMv8.0 */ val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER); From patchwork Mon Feb 14 06:57:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745044 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A8EFCC433F5 for ; Mon, 14 Feb 2022 07:03:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=RZB/kapvXFJ8/k+AAMuemYS/O+QFvyP4FMl5PMqNMqM=; b=xSmiuhVg5SI4UJfsu0OcX3MhH7 A5RSblwb1Wm8qxmKGmwKdFJQcIC6ANqvGLk4GEmAKzsyPGnPLvxlTllLJ8EUsudxIiHgaVFnKUr+I tKZl53tKbYWIff2KWy4UBwQ/AC/mRvoT4ydO4uOD5IuEzICG+YsfoQ4wiwVzpL7lGQ8mFxjIGHDBa yk2K/PkYOzVxERZF7z1LPe9lz2Ox+2qMl83kFLZEU8eGA9jS8g7hssZa7dM/I85ylSqEzEhJEKAZj dYov8VFpQ8GxrW4YYB2DkWpNC9YoenZgitniiH8TwQCVY4AiPTrWcad5IXti7QeASvQrIxW7ainHo we61ALgA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVMb-00DXVP-CR; Mon, 14 Feb 2022 07:01:33 +0000 Received: from mail-pl1-x649.google.com ([2607:f8b0:4864:20::649]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVKr-00DWdI-5V for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 06:59:49 +0000 Received: by mail-pl1-x649.google.com with SMTP id q23-20020a170902edd700b0014ed722ba9cso3280600plk.18 for ; Sun, 13 Feb 2022 22:59:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=6OEjGTSBcs9E9DCSR861EY1Iv8ylt1L45PBzbTqlbig=; b=lkbqq2YiEoOz++TGO9eSGDVy2tgftXNGnR0+H6WJpB+lJCJDvtMKow42G9L4fUF/od pgKio252qg02AwgrHRhNYR1CWBggMhWSzoArmmDelMzcIIN5+N7fpYoBVGseGIhexM2p lSkqlvmx5cZaNdofdlPNZa8fGOyBHoA0Pm0nIe0vbck8hYu2tho6Hq1CcR1s5/6Wn+uc Aj1zlzSamSCaV7cn4axUM8UAiqamPeNI0mJk5Od6Yb8gfBN0YflZX2U1oL4Nt8U1Kr5H XhzFpYPk+lI1OTXs5lM8jtKca9/ZVWLHZ8uBJWeggDgx/aiTRfw0mQ6k42B/i+OIw336 ogGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=6OEjGTSBcs9E9DCSR861EY1Iv8ylt1L45PBzbTqlbig=; b=6wNL+YoJPdpL63HR3+j1+I4iNOnLURt86NtkgLT3Jzv/MR5kPSsKIe5ls1NDPMgSe1 8HWz4FwTcHONZrNoYp+gvG/JvdEfO13oyIA07afd7lta8yjAII+ZVeu+gd+WoJgYbP6e 1qIUBhl50LAaN/i/16ljq4pImuvJK2PBn5EsM4eHx21pkCkT4xOVUOBVXzp9BkyU8j/P GN9wlhk9odFJLk6mZuo8mgYf/PYEmlN1O9m/OwuBob5FGZZADw8fyhHXhnWM0gXZDPdX knl3P1W9jPLlweaQ+Om0UV+CIiAOpclxIJmMLFH8r5UJ8WWkEtNibH8PebdOzjMRp/+n drug== X-Gm-Message-State: AOAM5325zzSjgq9Onxh2h9OXzz/j4sOU8VfUulpbbW6pU/2cMkH+9t3o u+ikRjPE68MroLfm9KFrvf3IrfUFZkI= X-Google-Smtp-Source: ABdhPJxAahVKstmCyHWIx48Kjk5Z/Shz7FLxfutP2t3sqVVA2Gn1b8K1HKbt0o57a5EH+d4hHgCkNx0hU6E= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:902:cec9:: with SMTP id d9mr12377298plg.128.1644821983308; Sun, 13 Feb 2022 22:59:43 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:27 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-9-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 08/27] KVM: arm64: Make ID_AA64MMFR0_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_225945_266968_04F9A163 X-CRM114-Status: GOOD ( 21.40 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for ID_AA64MMFR0_EL1 to make it writable by userspace. Since ID_AA64MMFR0_EL1 stage 2 granule size fields don't follow the standard ID scheme, we need a special handling to validate those fields. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 127 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 127 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 7032a7285447..4ed15ae7f160 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -458,6 +458,118 @@ static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu, return 0; } +/* + * Check if the requested stage2 translation granule size indicated in + * @mmfr0 is also indicated in @mmfr0_lim. 
+ * If TGranX_2 field is zero, the value must be validated based on TGranX + * field because that indicates the feature support is identified in + * TGranX field. + * This function relies on the fact TGranX fields are validated before + * through arm64_check_features_kvm. + */ +static int aa64mmfr0_tgran2_check(int field, u64 mmfr0, u64 mmfr0_lim) +{ + s64 tgran2, lim_tgran2, rtgran1; + int f1; + bool is_signed; + + tgran2 = cpuid_feature_extract_unsigned_field(mmfr0, field); + lim_tgran2 = cpuid_feature_extract_unsigned_field(mmfr0_lim, field); + if (tgran2 && lim_tgran2) + /* + * We don't need to check TGranX field. We can simply + * compare tgran2 and lim_tgran2. + */ + return (tgran2 > lim_tgran2) ? -E2BIG : 0; + + if (tgran2 == lim_tgran2) + /* + * Both of them are zero. Since TGranX in @mmfr0 is already + * validated by arm64_check_features_kvm, tgran2 must be fine. + */ + return 0; + + /* + * Either tgran2 or lim_tgran2 is zero. + * Need stage1 granule size to validate tgran2. + */ + + /* + * Get TGranX's bit position by subtracting 12 from TGranX_2's bit + * position. + */ + f1 = field - 12; + + /* TGran4/TGran64 is signed and TGran16 is unsigned field. */ + is_signed = (f1 == ID_AA64MMFR0_TGRAN16_SHIFT) ? false : true; + + /* + * If tgran2 == 0 (&& lim_tgran2 != 0), the requested stage2 granule + * size is indicated in the stage1 granule size field of @mmfr0. + * So, validate the stage1 granule size against the stage2 limit + * granule size. + * If lim_tgran2 == 0 (&& tgran2 != 0), the stage2 limit granule size + * is indicated in the stage1 granule size field of @mmfr0_lim. + * So, validate the requested stage2 granule size against the stage1 + * limit granule size. + */ + + /* Get the relevant stage1 granule size to validate tgran2 */ + if (tgran2 == 0) + /* The requested stage1 granule size */ + rtgran1 = cpuid_feature_extract_field(mmfr0, f1, is_signed); + else /* lim_tgran2 == 0 */ + /* The stage1 limit granule size */ + rtgran1 = cpuid_feature_extract_field(mmfr0_lim, f1, is_signed); + + /* + * Adjust the value of rtgran1 to compare with stage2 granule size, + * which indicates: 1: Not supported, 2: Supported, etc. + */ + if (is_signed) + /* For signed, -1: Not supported, 0: Supported, etc. */ + rtgran1 += 0x2; + else + /* For unsigned, 0: Not supported, 1: Supported, etc. */ + rtgran1 += 0x1; + + if ((tgran2 == 0) && (rtgran1 > lim_tgran2)) + /* + * The requested stage1 granule size (== the requested stage2 + * granule size) is larger than the stage2 limit granule size. + */ + return -E2BIG; + else if ((lim_tgran2 == 0) && (tgran2 > rtgran1)) + /* + * The requested stage2 granule size is larger than the stage1 + * limit granulze size (== the stage2 limit granule size). 
+ */ + return -E2BIG; + + return 0; +} + +static int validate_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + u64 limit = id_reg->vcpu_limit_val; + int ret; + + ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN4_2_SHIFT, val, limit); + if (ret) + return ret; + + ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN64_2_SHIFT, val, limit); + if (ret) + return ret; + + ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN16_2_SHIFT, val, limit); + if (ret) + return ret; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -549,6 +661,20 @@ static struct id_reg_info id_aa64isar1_el1_info = { .vcpu_mask = vcpu_mask_id_aa64isar1_el1, }; +static struct id_reg_info id_aa64mmfr0_el1_info = { + .sys_reg = SYS_ID_AA64MMFR0_EL1, + /* + * When TGranX_2 value is 0, validity of the value depend on TGranX + * value, and TGranX_2 value must be validated against TGranX value, + * which is done by validate_id_aa64mmfr0_el1. + * So, skip the regular validity checking for TGranX_2 fields. + */ + .ignore_mask = ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4_2) | + ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64_2) | + ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16_2), + .validate = validate_id_aa64mmfr0_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -562,6 +688,7 @@ static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, [IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info, [IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info, + [IDREG_IDX(SYS_ID_AA64MMFR0_EL1)] = &id_aa64mmfr0_el1_info, }; static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) From patchwork Mon Feb 14 06:57:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 91CE5C433EF for ; Mon, 14 Feb 2022 07:03:55 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=CvBWgAh7hVIKA+qrU+D+wmKV8iGc9UbBTZ7CQc6ajWA=; b=3gaRmfTH/zW11ujf5/aYxCxLsP nLQW7NGQMDwENbr3mkHIr8wWSNNxnPjA4TEKirnpi7i85HBpkmTbe+YFbJm1btAsIC5khtuqxvfaQ B49WBgRUy/xT7a9XsGEjPQ7TNlKiAG0KoKyF7hXJbih+ROncHaUG/v5KJ3MaPNMF0nhRxAc5LTjxb 1ZjvIs575ZRhswbr8jvtVbH4vpYcqG+bUs8+KE832LaN6OTZnSM6dhj2XdsNrA+mxH7xervQT9i8g 3x6YvKZdmM3BIGeWa3f6PBRcpEwBQXju027+bcoSLsPHuwOtUOyIKDpIwWVg72p5qB8ujDs5KE1Lm T6hbb7Ag==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVNW-00DXy0-TY; Mon, 14 Feb 2022 07:02:32 +0000 Received: from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049]) by bombadil.infradead.org with 
esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVKx-00DWgA-Ne for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 06:59:53 +0000 Received: by mail-pj1-x1049.google.com with SMTP id c6-20020a17090a020600b001b9b44c18aaso3095933pjc.4 for ; Sun, 13 Feb 2022 22:59:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Brz6Y3j0MZViYAVMVNSgTmSSQnU2BlZWdtLqRqNaGsU=; b=Zu/UInFm39+Vm/bojUEccM94nZ9lA0DMX2QsEIaUyNi8fhkgvXbwYgWeczj5b+togF OMZJQ6q0qT+ld4tcgGrXQqv4ToFMTZ2Y9L/bIMZGz6WCDzumtbAA6qh+dQhCaKD8U1Fc cUTF8pUdBeTFJgl/6rtcjojOSrcXYXYnlWZF9OiwZE8qdd9uihGFSmBUwXW+YpTykR0w do5vJatra5x1BNWBA02xXjlt/ghW9hNKqzsVg8YLdRuoEMKUWLmDdsNZUnR7NvAdwQlt +kQLPlttHiIFiijR9O85L0qGSS2/OsSZMNIuhy5no5X7OsBRsee+pRBI0k8KPonH4m0z QRIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Brz6Y3j0MZViYAVMVNSgTmSSQnU2BlZWdtLqRqNaGsU=; b=nEZQULIHdJGhc3AklSztjfd4qeNalbfLOw67dHWPknqc7g/FlEgx51wsDfkqoiQpkH d9Fm3lYJjM592ge/2Ho2v8283Tp8hm5wOGLGSgaDHtdarIdXxmbyebpqOBpmW3FHKl4Y 88KvBrbl+CVD4YuS3H0S6W2ff2AHk0BWnKWlw28tMsLYiqVOT7zt0FkHkamf5R3TQf31 PQexWj7xvavv2Jlm8bg5bdmLqlDrLrbVw7Cj+4ulNBbCYEhpGI9TsadcV6OIqG+w0YDQ PI+gUrvzOIXde79khqHiV5T0nplohx8YRf9FA+jzIDglEne3QCaZ2v1pWIT5wZToA2Wl +krw== X-Gm-Message-State: AOAM532bcN5Js/yZjGoyLHC38TGfqU0YLBw66wINaVwthGAZvEDmixPu YXl4BI0nGT7PjjYRDEyKfsOLVmObaRc= X-Google-Smtp-Source: ABdhPJwWwbe9hH9AKFYWngQ18eSzKL4vUbmfnE2yQiNzgqPynQfz0rkiJhzI+TM01pY5PO5th91tCagghrU= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a05:6a00:894:: with SMTP id q20mr12856682pfj.79.1644821989848; Sun, 13 Feb 2022 22:59:49 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:28 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-10-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 09/27] KVM: arm64: Make ID_AA64MMFR1_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_225951_815324_9736E729 X-CRM114-Status: GOOD ( 15.13 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for ID_AA64MMFR1_EL1 to make it writable by userspace. Hardware update of Access flag and/or Dirty state in translation table needs to be disabled for the guest to let userspace set ID_AA64MMFR1_EL1.HAFDBS field to a lower value. It requires trapping the guest's accessing TCR_EL1, which KVM doesn't always do (in order to trap it without FEAT_FGT, HCR_EL2.TVM needs to be set to 1, which also traps many other virtual memory control registers). 
So, userspace won't be allowed to modify the value of the HAFDBS field. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 4ed15ae7f160..1c137f8c236f 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -570,6 +570,30 @@ static int validate_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu, return 0; } +static int validate_id_aa64mmfr1_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + u64 limit = id_reg->vcpu_limit_val; + unsigned int hafdbs, lim_hafdbs; + + hafdbs = cpuid_feature_extract_unsigned_field(val, ID_AA64MMFR1_HADBS_SHIFT); + lim_hafdbs = cpuid_feature_extract_unsigned_field(limit, ID_AA64MMFR1_HADBS_SHIFT); + + /* + * Don't allow userspace to modify the value of HAFDBS. + * Hardware update of Access flag and/or Dirty state in translation + * table needs to be disabled for the guest to let userspace set + * HAFDBS field to a lower value. It requires trapping the guest's + * accessing TCR_EL1, which KVM doesn't always do (in order to trap + * it without FEAT_FGT, HCR_EL2.TVM needs to be set to 1, which also + * traps many other virtual memory control registers). + */ + if (hafdbs != lim_hafdbs) + return -EINVAL; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -675,6 +699,11 @@ static struct id_reg_info id_aa64mmfr0_el1_info = { .validate = validate_id_aa64mmfr0_el1, }; +static struct id_reg_info id_aa64mmfr1_el1_info = { + .sys_reg = SYS_ID_AA64MMFR1_EL1, + .validate = validate_id_aa64mmfr1_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. 
@@ -689,6 +718,7 @@ static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info, [IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info, [IDREG_IDX(SYS_ID_AA64MMFR0_EL1)] = &id_aa64mmfr0_el1_info, + [IDREG_IDX(SYS_ID_AA64MMFR1_EL1)] = &id_aa64mmfr1_el1_info, }; static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) From patchwork Mon Feb 14 06:57:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 729CBC4332F for ; Mon, 14 Feb 2022 07:04:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=lkTQZ8dNhOPDSUF1DL91iFeRW1Gyeu+cwWMT+rB/qag=; b=rl2vjFo8ABJqhjGiga1A+AOlPr z9HOtF0tAcbJhmOcmukw9nhGWKkslbbiT+mLwNk7Uwtg80qTE8L6sCYKyLwK7mR3fg9CDNydZAqxu O6PiQpqZe9wCXXyr4NXEEz7N7i1NBPCGzN9KMTfs9LbVT15TKvrkvw+mCpYn1lE+lGPhmRM6oBk55 jMSSg0e28WS2PV0SGV9vmzR+6/zxY2dX6efL/WX5UNi24OHMw9WUQ4pNtJMJVAnklSG15V4ymCG1C PDj5jN77rK/XPYbN+N0o6QmvUc7TLKeHcs26eoRPApU7RwO8GGmOQDeNnmbp077bGZjUbxg/SBw9m hCQYS9yg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVNt-00DYAb-PY; Mon, 14 Feb 2022 07:02:53 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVL3-00DWjR-6G for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 06:59:59 +0000 Received: by mail-pj1-x104a.google.com with SMTP id y10-20020a17090a134a00b001b8b7e5983bso10291216pjf.6 for ; Sun, 13 Feb 2022 22:59:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=03TUyFaPg0/Gjl+vQfTSvgoxQ+5uxs9Hi6MR9A6wLJ8=; b=KcLvYfrEmbcbn9/knN5ogFzworOFEeVL5ZK8AZWWTFU0yGrQ6hL5mDfGOL7otoY7DR MyXrJDy49UxK0NIylHzK5Kb2vY5ZA3xGz37wsQNkJTS8gxji/7yEkk7ZAj6jvZCtspx8 MdP3LyyAIa4guOEcXFwmjfL55hNOw3+XlP66BJuic56ao47fmmNw8vZtsWUpmU0kdsAz QiuI4vQKwMgyMdez4e0eoBFAWk5ZE58xz9J0/ZqFIYYdwrRbzcYtVJDPR5cHipUfImVa 1KH0lY3Hlg2gzWwhjWzN3a1U1mJ/R4Am9ZBJ6ItMl/BtBeQ6A5e//6G8Y//FT3r6R1AF 259A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=03TUyFaPg0/Gjl+vQfTSvgoxQ+5uxs9Hi6MR9A6wLJ8=; b=6fa+nokLFvu6c2CzKexfzNdtm/4m5EDdYD8Pcd99O8LdZdR1TJVr92+p6DE0eobzX/ GBN5fKvMNxtCT97IyJ/46GuHlVR3oMhL4TPSBpeRE58O4ABG1SlE2YAu0mrlxU8oMnK2 ZWAXKxffDgtYQX6jdFfNT3zDJLaHypdgpMKGszUi0sn82TOz4yYdCfuFmV6lNbCDnHv5 lKt9rtXkIWamy4hs1V86FxS3ZBYbK1GsVo2NKDB5YkBfpH4FFUHvz2rat3kV9p5gMBUF jnX+1kDdUTR6gvKPXlrEWOrlUqcQYYolPFy4r+A8h238jF0LZEGtM46hOZ8w5JvYkeOQ MTvw== 
X-Gm-Message-State: AOAM530Ik7OceMAiQwedwniGI1SzVULHWzwsH+HjWOK6/cQCIYLvBVRM mTvMY43z1ll+aRdzCT6BKPSWO8seZtM= X-Google-Smtp-Source: ABdhPJy+Udb+Z+ZE+Rw3PccqevtrcStNlD43OiYvy5Bfz0Nf6G3t2aLz7nIFNg+rtbp1DbKZPXtNfE5VWfk= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:903:2441:: with SMTP id l1mr12577060pls.142.1644821995705; Sun, 13 Feb 2022 22:59:55 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:29 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-11-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 10/27] KVM: arm64: Hide IMPLEMENTATION DEFINED PMU support for the guest From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_225957_320097_2A8561FD X-CRM114-Status: GOOD ( 12.19 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When ID_AA64DFR0_EL1.PMUVER or ID_DFR0_EL1.PERFMON is 0xf, which means IMPLEMENTATION DEFINED PMU supported, KVM unconditionally expose the value for the guest as it is. Since KVM doesn't support IMPLEMENTATION DEFINED PMU for the guest, in that case KVM should expose 0x0 (PMU is not implemented) instead. Change cpuid_feature_cap_perfmon_field() to update the field value to 0x0 when it is 0xf. 
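A short worked example of the problem being fixed: with the old code, a PMUVer of 0xf was first rewritten to 0, which never exceeds the cap, so the function fell through and left the original 0xf in 'features', exposing the IMPLEMENTATION DEFINED value to the guest. Returning (features & ~mask) clears the field instead. A simplified model of just that branch is below, for illustration only; the real cpuid_feature_cap_perfmon_field() also writes the capped value back into the field, which is omitted here.

#include <stdint.h>

static uint64_t cap_perfmon_old(uint64_t features, unsigned int shift, uint64_t cap)
{
	uint64_t mask = 0xfULL << shift;
	uint64_t val  = (features >> shift) & 0xf;

	if (val == 0xf)		/* IMPLEMENTATION DEFINED */
		val = 0;	/* ...but 'features' still contains 0xf */
	if (val > cap)
		features &= ~mask;
	return features;	/* bug: the 0xf field value survives */
}

static uint64_t cap_perfmon_new(uint64_t features, unsigned int shift, uint64_t cap)
{
	uint64_t mask = 0xfULL << shift;
	uint64_t val  = (features >> shift) & 0xf;

	if (val == 0xf)		/* IMPLEMENTATION DEFINED: report "not implemented" */
		return features & ~mask;
	if (val > cap)
		features &= ~mask;
	return features;
}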
Fixes: 8e35aa642ee4 ("arm64: cpufeature: Extract capped perfmon fields") Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/cpufeature.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index a9edf1ca7dcb..375c9cd0123c 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -553,7 +553,7 @@ cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap) /* Treat IMPLEMENTATION DEFINED functionality as unimplemented */ if (val == ID_AA64DFR0_PMUVER_IMP_DEF) - val = 0; + return (features & ~mask); if (val > cap) { features &= ~mask; From patchwork Mon Feb 14 06:57:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2AFEDC433EF for ; Mon, 14 Feb 2022 07:05:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=0KsFlV1xVUwo64UjfmDCOg8Li5o3/tFOnG/asBjGlcQ=; b=u4NTiiXcgbg/pySzd7aZTNGrL9 DpkFR34U/4GNskKLLp6djo4GEu2QqbFajn6/UwAP3TnjW9KlR3tJnDRKmb0wTd4yXp2mBfGc5dq1y 30VlJiwcGbL/XRVG3HIKoN+5tLfHOypPMh584gpj0MrCsNQyFq3ybMDHoPzJNqJu0IauLs5IQZ4Xo /BxAu2JvLMTH+gY4hQPgk3XHQafA29AOTF7Jj6t/iJtFYSRETAgI6YKWtrnHbPql31X+YP/6dZQUS 2E8MaaeIvRoA9KPC1ZZ4L+5xjBKdikL9ThzNZ/Kr9GDCROiugk2V6cam0MnPXpfKirDQH3swIXk4Z FIeEYD8g==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVOZ-00DYQy-4m; Mon, 14 Feb 2022 07:03:35 +0000 Received: from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLF-00DWqC-GO for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:00:11 +0000 Received: by mail-pj1-x1049.google.com with SMTP id fh23-20020a17090b035700b001b9a9045bceso3511128pjb.8 for ; Sun, 13 Feb 2022 23:00:08 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=AlRY+TVEbXq6jkXsggOYA+sdawrTQOSBDRRIDVsnbf4=; b=gOQtZBYK4/O+UGc2DlGN3H2S4bbXG+/Ysw+X1OToYjnAmANzwJ00NKQDaEden8R6Lt EIks1VH/ELQa/Evwz2khcecWxunjNUdoPyJnoDia7e0nOfCmfl2WIjUCiZO0dFohx7xu 3ffjl8dBUtMXu2aLvUFqnTrWUEjHQBhiBBUWcNVA07TaIPCjoTA5gVcC5Y2mid+IC2fQ 6nFH+MPVzCX2FZWa/afdcmzaWOzb9bxUiLX/EV2uHOW/5Md7QU9VLbOYf+sYlvpEo6Fx Z6mwN+G+TEcs0x0vCmQTseG3ZeRFY/0Rp8Xvw2NmKP13WhVZBcfzdd9LSrEF1kt51hvn 0vTg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=AlRY+TVEbXq6jkXsggOYA+sdawrTQOSBDRRIDVsnbf4=; b=pG8ZUtLQS690igjJU7KlwAJvKADllUWeFBFZVHwJ3k+lZf+UfO7Yt3W6WQbc1OTl/p 
SgBnpT8Qtt6S+xe1M/ChJ9sRYsMd9dTpl/XTJqpKmJmquiSWnyxtY4PI/W20ZCc/Rnjg nd0AXOMwcxqTJu39Xv0FQ2CeC+aoHCV2KWu264k3SbsI1xjD1Yvsc0BpF9Zzd4jn9qtv PU6/VqQnse8tDzvjHrlkGKDR7rOQ60MWMoDC9k+5LXEu6xguqRNeodDIsSXaoOQrKP/k 3ucpplS0c3ct0zkiat8KL5DXqW0DJ289AZwE6pTw+JEvWrZ79btcmFKlWxSExziPA/rg rksQ== X-Gm-Message-State: AOAM533sN7j6ouqWAnxew9T38yJ8swVi7UER+cI39mB8p98AYXCAwkj0 m0LUoO5uDht/DxdIF93bdS74pwASUqA= X-Google-Smtp-Source: ABdhPJwdP31lN+BaOgTpUFbEnsC6ugEKu7lGxqBZRuiZPsew2z5H4gOvrp4Hl8/IoytO1+AVbR2qYnOOFmI= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:90a:e40f:: with SMTP id hv15mr1639878pjb.1.1644822008078; Sun, 13 Feb 2022 23:00:08 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:30 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-12-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 11/27] KVM: arm64: Make ID_AA64DFR0_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230009_587640_ED547EB2 X-CRM114-Status: GOOD ( 18.51 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for ID_AA64DFR0_EL1 to make it writable by userspace. Return an error if userspace tries to set PMUVER field of the register to a value that conflicts with the PMU configuration. Since number of context-aware breakpoints must be no more than number of supported breakpoints according to Arm ARM, return an error if userspace tries to set CTX_CMPS field to such value. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 83 +++++++++++++++++++++++++++++++++------ 1 file changed, 71 insertions(+), 12 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1c137f8c236f..ae379755fa26 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -594,6 +594,45 @@ static int validate_id_aa64mmfr1_el1(struct kvm_vcpu *vcpu, return 0; } +static bool id_reg_has_pmu(u64 val, u64 shift, unsigned int min) +{ + unsigned int pmu = cpuid_feature_extract_unsigned_field(val, shift); + + /* + * Treat IMPLEMENTATION DEFINED functionality as unimplemented for + * ID_AA64DFR0_EL1.PMUVer/ID_DFR0_EL1.PerfMon. + */ + if (pmu == 0xf) + pmu = 0; + + return (pmu >= min); +} + +static int validate_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + unsigned int brps, ctx_cmps; + bool vcpu_pmu, dfr0_pmu; + + brps = cpuid_feature_extract_unsigned_field(val, ID_AA64DFR0_BRPS_SHIFT); + ctx_cmps = cpuid_feature_extract_unsigned_field(val, ID_AA64DFR0_CTX_CMPS_SHIFT); + + /* + * Number of context-aware breakpoints can be no more than number of + * supported breakpoints. 
+ */ + if (ctx_cmps > brps) + return -EINVAL; + + vcpu_pmu = kvm_vcpu_has_pmu(vcpu); + dfr0_pmu = id_reg_has_pmu(val, ID_AA64DFR0_PMUVER_SHIFT, ID_AA64DFR0_PMUVER_8_0); + /* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */ + if (vcpu_pmu ^ dfr0_pmu) + return -EPERM; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -637,8 +676,25 @@ static void init_id_aa64isar1_el1_info(struct id_reg_info *id_reg) id_reg->vcpu_limit_val &= ~PTRAUTH_MASK; } +static void init_id_aa64dfr0_el1_info(struct id_reg_info *id_reg) +{ + u64 limit = id_reg->vcpu_limit_val; + + /* Limit guests to PMUv3 for ARMv8.4 */ + limit = cpuid_feature_cap_perfmon_field(limit, ID_AA64DFR0_PMUVER_SHIFT, + ID_AA64DFR0_PMUVER_8_4); + /* Limit debug to ARMv8.0 */ + limit &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER); + limit |= (FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6)); + + /* Hide SPE from guests */ + limit &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER); + + id_reg->vcpu_limit_val = limit; +} + static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, - const struct id_reg_info *idr) + const struct id_reg_info *idr) { return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); } @@ -655,6 +711,12 @@ static u64 vcpu_mask_id_aa64isar1_el1(const struct kvm_vcpu *vcpu, return vcpu_has_ptrauth(vcpu) ? 0 : PTRAUTH_MASK; } +static u64 vcpu_mask_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, + const struct id_reg_info *idr) +{ + return kvm_vcpu_has_pmu(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER); +} + static struct id_reg_info id_aa64pfr0_el1_info = { .sys_reg = SYS_ID_AA64PFR0_EL1, .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), @@ -704,6 +766,13 @@ static struct id_reg_info id_aa64mmfr1_el1_info = { .validate = validate_id_aa64mmfr1_el1, }; +static struct id_reg_info id_aa64dfr0_el1_info = { + .sys_reg = SYS_ID_AA64DFR0_EL1, + .init = init_id_aa64dfr0_el1_info, + .validate = validate_id_aa64dfr0_el1, + .vcpu_mask = vcpu_mask_id_aa64dfr0_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -715,6 +784,7 @@ static struct id_reg_info id_aa64mmfr1_el1_info = { static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, + [IDREG_IDX(SYS_ID_AA64DFR0_EL1)] = &id_aa64dfr0_el1_info, [IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info, [IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info, [IDREG_IDX(SYS_ID_AA64MMFR0_EL1)] = &id_aa64mmfr0_el1_info, @@ -1643,17 +1713,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) val &= ~(id_reg->vcpu_mask(vcpu, id_reg)); switch (id) { - case SYS_ID_AA64DFR0_EL1: - /* Limit debug to ARMv8.0 */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER); - val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6); - /* Limit guests to PMUv3 for ARMv8.4 */ - val = cpuid_feature_cap_perfmon_field(val, - ID_AA64DFR0_PMUVER_SHIFT, - kvm_vcpu_has_pmu(vcpu) ? 
ID_AA64DFR0_PMUVER_8_4 : 0); - /* Hide SPE from guests */ - val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER); - break; case SYS_ID_DFR0_EL1: /* Limit guests to PMUv3 for ARMv8.4 */ val = cpuid_feature_cap_perfmon_field(val, From patchwork Mon Feb 14 06:57:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BDB55C433F5 for ; Mon, 14 Feb 2022 07:05:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=VF6C5YCOGsX4/RPkqYryZyBEfmATOAyjHzkA7YwsphE=; b=eM3/tSkqdbHHL8oMI+3qg3TuT+ BSgXsuRsiBfxmhm7s1PRkfX7Wl1dE/jJ1y6JdX1JFEHGvYswBTfiWR1adk4odHF9mYyx8csc5cxtP 7eB2sadKA4DSh4F0pdozLA6f9KyarieEDXWgIeMTpZKRP+o2x172wliH8CwAyhkrLf/CUcX0XNqpD e57kHl5tIsweq5G5fgPsHX63HyWEg5m7I20yZiX5suawPyASlODUonnb/G/z3PK4+y1PKP6lV94Zb vbpRusKyQXSONSlQNykM+r9FIxeuBqfG/SaTcC9892OZBrcOcVkO2RfgvWRNdSKKG4rGcLYzrykaY lJSo6ASw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVP3-00DYfs-DM; Mon, 14 Feb 2022 07:04:05 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLR-00DWwp-U8 for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:00:23 +0000 Received: by mail-pj1-x104a.google.com with SMTP id h16-20020a17090ac39000b001b8d02b2efaso11517091pjt.5 for ; Sun, 13 Feb 2022 23:00:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=3YXTeyXBEmNCdPDjiReBLqzjLxovi4NAJDqchxZh9cs=; b=hqjIH2MDABDU/OeLEKZWo+DdKFtbNKPzc8DzAsmSoewDb0TQvvmNYfZoqr4LRPXuCu 3/MsTvJul6Ij9zh2P/rEwbxSoWxJwP9B3mBPhm7La4N4RoGaWILY7KjdW5Ujj+uLPPvm M8Rne1sT9cGZvK0KeTNOmjBlWuqyGQ7lIqjcUAy21IrRnsS/u00py3MfG7ueSilWBRKe SG+Y7xvtyl4UaohTa/VoGlSW5XHwO4NES1aGnE+cCOEbkyF3DSgDWWRVfIcu/KgBuFdE 5GTsbRB0iQ3R9rrjRdgHrJVZ7GoLfXXoxQVXuhy5kcR6limdJhuizj5foMhZAqnFY7pX 7jPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=3YXTeyXBEmNCdPDjiReBLqzjLxovi4NAJDqchxZh9cs=; b=Gmxggoc+4AbY4fG5hCoUF3qZbfpyIoHinYy6Rj4JGhNCH6LkyItgnQj62SC+1ThpU9 05lQo8dQzA2wSDZIw28l9Td9/bb52/LRVu9CacWXhY9JH6aLdi+ZvYj2a4tkfbdbRtlf Bp8UfdsNXTcXif5XWo/+JbKkdxcbpFbBPxSnspE0gTW2Cw4SO2Tz2EL4HvZOmme3blnm zZqGT2oUnGCpyZW23McQr+6WxWfQs4ePROz7OnD8YgyRAD1pAM1hDlWIi89TTg8ngbA8 21CYXWndzWqTXhgHdZyzlGpJoOFJU3GVHgPxCUP5ksYnqkUjPctvib5322sxGF1JiHYF XMiA== X-Gm-Message-State: AOAM5335+8CZNYzVSroajYomGPu6hL49f6fT7ZmYG7wIR3yR2uL30DW6 bQa9XslW9FBmJrXYEJTbhvQqfEFWe2A= X-Google-Smtp-Source: 
ABdhPJyq9BwnHRiqYEmw3Jv6ypDLwMjFgJah6jMEDc6CbrOMpkBYC8Pq1R+X3qit7EcVAKLFGgrEBsgt/Rk= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a63:368f:: with SMTP id d137mr10488057pga.475.1644822020701; Sun, 13 Feb 2022 23:00:20 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:31 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-13-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 12/27] KVM: arm64: Make ID_DFR0_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230022_053279_496DBF6B X-CRM114-Status: GOOD ( 16.40 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for ID_DFR0_EL1 to make it writable by userspace. Return an error if userspace tries to set PerfMon field of the register to a value that conflicts with the PMU configuration. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 55 ++++++++++++++++++++++++++++++++------- 1 file changed, 45 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index ae379755fa26..90e6a85d4e31 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -633,6 +633,27 @@ static int validate_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, return 0; } +static int validate_id_dfr0_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + bool vcpu_pmu, dfr0_pmu; + unsigned int perfmon; + + perfmon = cpuid_feature_extract_unsigned_field(val, ID_DFR0_PERFMON_SHIFT); + if (perfmon == 1 || perfmon == 2) + /* PMUv1 or PMUv2 is not allowed on ARMv8. */ + return -EINVAL; + + vcpu_pmu = kvm_vcpu_has_pmu(vcpu); + dfr0_pmu = id_reg_has_pmu(val, ID_DFR0_PERFMON_SHIFT, ID_DFR0_PERFMON_8_0); + + /* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */ + if (vcpu_pmu ^ dfr0_pmu) + return -EPERM; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -693,8 +714,17 @@ static void init_id_aa64dfr0_el1_info(struct id_reg_info *id_reg) id_reg->vcpu_limit_val = limit; } +static void init_id_dfr0_el1_info(struct id_reg_info *id_reg) +{ + /* Limit guests to PMUv3 for ARMv8.4 */ + id_reg->vcpu_limit_val = + cpuid_feature_cap_perfmon_field(id_reg->vcpu_limit_val, + ID_DFR0_PERFMON_SHIFT, + ID_DFR0_PERFMON_8_4); +} + static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, - const struct id_reg_info *idr) + const struct id_reg_info *idr) { return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE); } @@ -717,6 +747,12 @@ static u64 vcpu_mask_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, return kvm_vcpu_has_pmu(vcpu) ? 
0 : ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER); } +static u64 vcpu_mask_id_dfr0_el1(const struct kvm_vcpu *vcpu, + const struct id_reg_info *idr) +{ + return kvm_vcpu_has_pmu(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_DFR0_PERFMON); +} + static struct id_reg_info id_aa64pfr0_el1_info = { .sys_reg = SYS_ID_AA64PFR0_EL1, .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), @@ -773,6 +809,13 @@ static struct id_reg_info id_aa64dfr0_el1_info = { .vcpu_mask = vcpu_mask_id_aa64dfr0_el1, }; +static struct id_reg_info id_dfr0_el1_info = { + .sys_reg = SYS_ID_DFR0_EL1, + .init = init_id_dfr0_el1_info, + .validate = validate_id_dfr0_el1, + .vcpu_mask = vcpu_mask_id_dfr0_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -782,6 +825,7 @@ static struct id_reg_info id_aa64dfr0_el1_info = { */ #define GET_ID_REG_INFO(id) (id_reg_info_table[IDREG_IDX(id)]) static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { + [IDREG_IDX(SYS_ID_DFR0_EL1)] = &id_dfr0_el1_info, [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, [IDREG_IDX(SYS_ID_AA64DFR0_EL1)] = &id_aa64dfr0_el1_info, @@ -1712,15 +1756,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id) /* Clear fields for opt-in features that are not configured. */ val &= ~(id_reg->vcpu_mask(vcpu, id_reg)); - switch (id) { - case SYS_ID_DFR0_EL1: - /* Limit guests to PMUv3 for ARMv8.4 */ - val = cpuid_feature_cap_perfmon_field(val, - ID_DFR0_PERFMON_SHIFT, - kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0); - break; - } - return val; } From patchwork Mon Feb 14 06:57:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745049 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C69B9C433EF for ; Mon, 14 Feb 2022 07:06:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=PGKrVDibYRnjkltCdQmGtEEedR201txld7x9VBWTI+I=; b=Fr/vMMOJaaKt+iyLts0gG31Hpx w6zTOQJwAmlPEbihj0fLGTz5Wci4ytPtO0u9YIkHkKugt7fRmlMSD4uu3aD/D4YcjUmnbEGFaJNOC ottDLZ1c99B/mxZB78/sMaY5rXRgloiliKp78cxHhbXcZ0Tem07HbfNIKbLkz38+aFHLo39qIScML MunZp2PDyc4PRyqHKEtGwedrlK9NmRZWmhWY9sYFWYqAewnDJ0+635PmirZZg2T/7BmradJKaGl6O Wksx20p+saNHqV8ZNBQKuP8LO3WiWxdExwrUdhkBCxVb1PkNaNXLkaqIg3lZlP6rPQBlAKVrsXh+c 8DINu1ag==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVPo-00DYy0-Bt; Mon, 14 Feb 2022 07:04:53 +0000 Received: from mail-pl1-x64a.google.com ([2607:f8b0:4864:20::64a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLb-00DX1n-6Q for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:00:32 +0000 Received: by 
mail-pl1-x64a.google.com with SMTP id e14-20020a170902ed8e00b0014d57a73462so5775560plj.15 for ; Sun, 13 Feb 2022 23:00:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bOvKI1kPCaNNGWhIjOWMoR69tGeqR4WLISP2Q7llMqI=; b=csOB7hi1VBA/oFDOyjQMZYQFqlg8l/OnmxylmJhek12832NT8JhFn5Phv7C6mUHwbp AKruxmYtN8nVv4ced2ng2XspMYt/cavaRDSE9kd8AsUuTGyjGmcIqUzihAgLo9HcDuB6 BKZZ4iza/sROO2ChnPsyCtEyqhIstwciWY1Bs1iz9w93e+WCzWDG9+wgT9g3NrJi2m5o 2mXJwZC6t+JxapfS4ExaBSlUMRQEj9sBcrQlQl75AxtlYa/XH9WUCCSZVc9vt/uzSogG YSlwNIZ0jXx8QSNce60sNbxG5PzHlTEWVEqjb7+D4psXrlSiEDExSYtKX1EolSUDlfmc EYWA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bOvKI1kPCaNNGWhIjOWMoR69tGeqR4WLISP2Q7llMqI=; b=agoBznPYAgWc+yqRk2ej/FkmYRAIHobmem5pnUOEf8eELoLPkmV7bNbSXCXyJiyUKI QtboDSCT1glkyAncG0XnJMNkjKxz/FxYWRd6JJP6MmdwYYO4pqTM+gM5iOZTrPQ7+zFo POpzr0rKesXFQCpSBk0f3BGtneM3yszC8jFgmUzNdHYvtevvpyWlEYA1889pkA4S0tuH Dha4ESfABiIETrwDoyCGAg3XcAHaEpkddcEVD7agnofaJFsANYlAYPhDfEE01gEdXPJI hnKSmyWNtNsqy5XdNejj05KUZPaK6/YUbTqH4rOA2uIQS9a/vqFzG9P4wbEUyKpsozdd UdGA== X-Gm-Message-State: AOAM530MGaSJEcVudTtHXgpkuFk2d388dJGqkCqejvNsSO6aDfZK1dxy YEeSMSN3rxbCDLzGmoJr4lc9noWYAEs= X-Google-Smtp-Source: ABdhPJx0j283c3uteG5vU/Wm7GzzzV9S4GvLk6zEMwainkGx3b9r3oXjbaFonEE5MPqvOlYgJbXjZ0p1T3E= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:aa7:9ad0:: with SMTP id x16mr12998774pfp.55.1644822030158; Sun, 13 Feb 2022 23:00:30 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:32 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-14-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 13/27] KVM: arm64: Make MVFR1_EL1 writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230031_281671_E3F66D97 X-CRM114-Status: GOOD ( 16.97 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org This patch adds id_reg_info for MVFR1_EL1 to make it writable by userspace. There are only a few valid combinations of values that can be set for FPHP and SIMDHP fields according to Arm ARM. Return an error when userspace tries to set those fields to values that don't match any of the valid combinations. 
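As a reference for VMM authors, the sketch below models the same FPHP/SIMDHP pairing rule in plain userspace C. This is not the kernel implementation (validate_mvfr1_el1() in the patch below is); the field offsets are taken from the Arm ARM layout of MVFR1_EL1 (FPHP at bits [27:24], SIMDHP at bits [23:20]) and the helper and table names are made up for illustration only.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* MVFR1_EL1 field offsets per the Arm ARM: FPHP [27:24], SIMDHP [23:20]. */
#define MVFR1_FPHP_SHIFT	24
#define MVFR1_SIMDHP_SHIFT	20

/*
 * Permitted (FPHP, SIMDHP) pairs: (0, 0) no half-precision support,
 * (2, 1) half-precision conversions only, (3, 2) half-precision
 * arithmetic (FEAT_FP16) for both FP and Advanced SIMD.
 */
static const unsigned int valid_pairs[][2] = { {0, 0}, {2, 1}, {3, 2} };

static bool mvfr1_fphp_simdhp_ok(uint64_t mvfr1)
{
	unsigned int fphp = (mvfr1 >> MVFR1_FPHP_SHIFT) & 0xf;
	unsigned int simdhp = (mvfr1 >> MVFR1_SIMDHP_SHIFT) & 0xf;
	size_t i;

	for (i = 0; i < sizeof(valid_pairs) / sizeof(valid_pairs[0]); i++)
		if (valid_pairs[i][0] == fphp && valid_pairs[i][1] == simdhp)
			return true;
	return false;
}

int main(void)
{
	printf("%d\n", mvfr1_fphp_simdhp_ok(0));				/* 1: (0, 0) */
	printf("%d\n", mvfr1_fphp_simdhp_ok((3ULL << 24) | (2ULL << 20)));	/* 1: (3, 2) */
	printf("%d\n", mvfr1_fphp_simdhp_ok(1ULL << 24));			/* 0: (1, 0) rejected */
	return 0;
}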
Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 90e6a85d4e31..fea7b49018b2 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -654,6 +654,36 @@ static int validate_id_dfr0_el1(struct kvm_vcpu *vcpu, return 0; } +static int validate_mvfr1_el1(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg, u64 val) +{ + unsigned int fphp, simdhp; + struct fphp_simdhp { + unsigned int fphp; + unsigned int simdhp; + }; + /* Permitted fphp/simdhp value combinations according to Arm ARM */ + struct fphp_simdhp valid_fphp_simdhp[3] = {{0, 0}, {2, 1}, {3, 2}}; + int i; + bool is_valid_fphp_simdhp = false; + + fphp = cpuid_feature_extract_unsigned_field(val, MVFR1_FPHP_SHIFT); + simdhp = cpuid_feature_extract_unsigned_field(val, MVFR1_SIMDHP_SHIFT); + + for (i = 0; i < ARRAY_SIZE(valid_fphp_simdhp); i++) { + if (valid_fphp_simdhp[i].fphp == fphp && + valid_fphp_simdhp[i].simdhp == simdhp) { + is_valid_fphp_simdhp = true; + break; + } + } + + if (!is_valid_fphp_simdhp) + return -EINVAL; + + return 0; +} + static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg) { u64 limit = id_reg->vcpu_limit_val; @@ -816,6 +846,11 @@ static struct id_reg_info id_dfr0_el1_info = { .vcpu_mask = vcpu_mask_id_dfr0_el1, }; +static struct id_reg_info mvfr1_el1_info = { + .sys_reg = SYS_MVFR1_EL1, + .validate = validate_mvfr1_el1, +}; + /* * An ID register that needs special handling to control the value for the * guest must have its own id_reg_info in id_reg_info_table. @@ -826,6 +861,7 @@ static struct id_reg_info id_dfr0_el1_info = { #define GET_ID_REG_INFO(id) (id_reg_info_table[IDREG_IDX(id)]) static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = { [IDREG_IDX(SYS_ID_DFR0_EL1)] = &id_dfr0_el1_info, + [IDREG_IDX(SYS_MVFR1_EL1)] = &mvfr1_el1_info, [IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info, [IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info, [IDREG_IDX(SYS_ID_AA64DFR0_EL1)] = &id_aa64dfr0_el1_info, From patchwork Mon Feb 14 06:57:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745050 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CE147C433EF for ; Mon, 14 Feb 2022 07:07:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=VSqgpk69b4ezwPj/DI1OHDMkVmpKZgpORIvf7O6fOwg=; b=eCjMxbODK6mscXBIJNV1fNYVIF /s7lZ+6ZnX2s3JSk2RK0N1sqz8p5NBqtww2d5ZIKH5uvU6srdOVQOp596U0Oj6OhVA4DbgYugiHPl qMbMdCzS91dyoFuQqPIo7M+dfR4vb+O2r8p0+CEN3+/MoB8gi7qNIKgDMP/uycf8M/xvMrkdxMqag ut1TZ3U91xICtpBULwwezK0oMRNqbs2IXtFWft/nglZvAb3OeyplXTWAc7HE3mvou04+88ZLJVS0X WmFkU9hZyb1UzBYo3seXKuiocGmRtAaygxmkdoXEFo1aysXR+onlX4DicSInYg8dw+vlxXS7mOpWI 
v2KREDog==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVQc-00DZHv-OU; Mon, 14 Feb 2022 07:05:43 +0000 Received: from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLl-00DX6N-Hh for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:00:43 +0000 Received: by mail-pj1-x1049.google.com with SMTP id s10-20020a17090a948a00b001b96be201f6so6205454pjo.4 for ; Sun, 13 Feb 2022 23:00:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=zPnMHKVlqcgDQB6kXKJ7+KuQAdNLH0y013yuxCMh/uc=; b=jAMpDPE2rwLv0xUEZh0rf99E5rltD6fSrYSdLKH9MDqodxvRCPt1zuOb6B9b1nSoMA LOc7lWlmOcHkEb3/SBDlw6Q7xWUkfRgOZ1/NqJWEvzc5aDDmO8WA4sfAwP6macnQqeyH r6q9dAjHDQScFr8QtjWa/2fYmOShpUA+EwsRzEeG57GB3ibYFUFWOAF6mvoXUDAVkyA5 qdH+/5VtcQPN2f26W69X4y+bwUZEa/LdHQgNuUtcEPiB3ZcsOsInUXYrSf0CqniMr8OB g3UniLutFzrP920Ugjw95OH7nvWvChiNn1a4eZS9a7keu0rnuQTJxjcx3A2k/2SX70Ay xBiA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=zPnMHKVlqcgDQB6kXKJ7+KuQAdNLH0y013yuxCMh/uc=; b=UKbae1RQsumWsnc2hzrI/1pkV3CtnahwoedVgzu0h2Khutd+aIDdxRs8vlVjLnzp5I soF4aK00YSkgT2PYd8aa3X4Fz1pb3jOZwOTVB+MLzTUfFEB1kznRxVqtXOuUOkuaqnbG qCHGmicD2KmpPiRVhRBhFuCcfXfUn/gVPKSfaLfy0g6tCYa82DT8EtFk/aYQ6QvUhNtW WIXW0LSIxmPJzD2s0rDO06oQ7A4o7vRcLSUiLoINoD8Z3RtU7CjvqDtevMQYBeQrTNcp 2h2GlKyUk60hoxSkjhf+2f5Ujkmv7BWHn9oEHd6zPCuZiamBdBYMBq03TqsBh81lQdyU /6pg== X-Gm-Message-State: AOAM532OlcsynKDd4HyHEOljHYesUfXzpzhlnXP4p9F0oKAMm9SavwX1 US2wX9YMvaGdoP2AA0ccv9haM7bH1AQ= X-Google-Smtp-Source: ABdhPJyJ0YhcLpxpXD9Q6qbj2a2Z8CEceYHauRoXxBka/sS6SnTbMSEqpjKMgS7gdS6Ry14yGjGjZ3cqwj8= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:903:11d0:: with SMTP id q16mr12831689plh.134.1644822040248; Sun, 13 Feb 2022 23:00:40 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:33 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-15-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 14/27] KVM: arm64: Make ID registers without id_reg_info writable From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230041_634530_D099225E X-CRM114-Status: GOOD ( 13.59 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Make ID registers that don't have id_reg_info writable. 
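For illustration, a minimal userspace sketch of how such a register could be narrowed before the first KVM_RUN, using the existing KVM_GET_ONE_REG/KVM_SET_ONE_REG uapi on arm64. The choice of ID_AA64MMFR2_EL1 and the helper name are arbitrary, error handling is reduced to the bare minimum, and it assumes a kernel with this series applied (KVM_CAP_ARM_ID_REG_CONFIGURABLE available).

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* pulls in <asm/kvm.h> for ARM64_SYS_REG() on arm64 */

/* ID_AA64MMFR2_EL1 is Op0=3, Op1=0, CRn=0, CRm=7, Op2=2. */
#define REG_ID_AA64MMFR2_EL1	ARM64_SYS_REG(3, 0, 0, 7, 2)

/* Clear one 4-bit feature field of an ID register for the guest. */
int hide_id_feature(int vcpu_fd, uint64_t reg_id, unsigned int shift)
{
	uint64_t val;
	struct kvm_one_reg reg = {
		.id = reg_id,
		.addr = (uint64_t)&val,
	};

	/* Read the current (host-limited) value KVM exposes to the guest. */
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	/* Lowering an unsigned field keeps it within the host limit. */
	val &= ~(0xfULL << shift);

	/* Rejected (errno set) if the requested value is not supported. */
	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}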
Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index fea7b49018b2..9d7041a10b41 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1909,11 +1909,8 @@ static int __set_id_reg(struct kvm_vcpu *vcpu, /* The value is same as the current value. Nothing to do. */ return 0; - /* - * Don't allow to modify the register's value if the register is raz, - * or the reg doesn't have the id_reg_info. - */ - if (raz || !GET_ID_REG_INFO(encoding)) + /* Don't allow to modify the register's value if the register is raz. */ + if (raz) return -EINVAL; /* From patchwork Mon Feb 14 06:57:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745051 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F2A2EC433EF for ; Mon, 14 Feb 2022 07:08:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=leJlTmGrsWHeQxvyir3cUE5ySOi51D4LY+yw5kWipvI=; b=ja7miweM1uOqHF3+t+xj6w38hK ZSF7rJ6ADcEOyoIQceO6GIvzj0+ivs7wUdmJHlTw/Tw7+VYjGVHBub+mv0vJeq4KcWKBdrQj5Fo9j csnaUfY/tpOjXQXOSq7Gy32Fd7dVl6cOtrgMMx7igWFiVtdn/9Q7YWs1EZVaA19DtrdgIk7EvzBoJ G45bU6FGyRxvMkO2KOgztuIfovv3MUGx56BPkw3AlCkR6UISSicZdORiFiUp/58IFnSFRWme+Q6es rLhwDBJn3HEagg+EXOYu/pj1aUr9sIiJ7XiHJ7mgVigmh7TDVBbGI45lQSaV2w0QYlGiQDM82MQz2 FOFgh2iQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVRi-00DZje-0S; Mon, 14 Feb 2022 07:06:50 +0000 Received: from mail-yb1-xb49.google.com ([2607:f8b0:4864:20::b49]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVLt-00DXAs-ER for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:00:51 +0000 Received: by mail-yb1-xb49.google.com with SMTP id s129-20020a254587000000b00621cf68a92fso10635696yba.13 for ; Sun, 13 Feb 2022 23:00:48 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Rhm0G4Hndfns0WwS5RiUqsnxD09RQ4PSWOitjVcchSA=; b=hMB7PKfeq4wP8cmvwOKAPsq8tMblTDA9ipBzD4QqMrvaqHeFOtd4qd2QujezGEmhjy drmVsLzZ98LR3Ay3+/d0fcBLocjvZsoDen0n80H8V77zYY8nJgoSD13SEETmzOfZHT/r 7DnXAb2ILeeorZwG0F5e435FUvXz118Mjwtitjl2GXMlOSUZyHrLD42zzZHdT8m6C4qY 4U7zQfFkL/GUuOIN2LldrYMRhbKo9oROypdAGqiUlhXG5bcmXYKqugsL+Icjrh1IFE+5 wMVjqarY+5xMoB39RlQcJD4OoUPtj173ospvYKmzx0FkEdEHBigzbUrKDjOks1BbTbyt cDSQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Rhm0G4Hndfns0WwS5RiUqsnxD09RQ4PSWOitjVcchSA=; 
b=zOVjaNWhCp65po4GgttvkNEhRlI9trv0msFlI8ZuHgutgf9s82LcI52r9ZXk8+6Xjf nhidddstYGajGJ4gbrs6wN6PjP0uqMQBPZmqXw8Mf9zm+BiTUeza3e0T6ceCzwlOyUF0 fv8Aj/wwvJ2Tg5l5sFjD7RT4jvMxjkTFWU7XXvR15L0flUmG0K7XOauu3iWdjsKvZ51E Ckb6lkjmt4dUpsudY6ZHNoNSnN/SuL89P6hslEyCF1slsPKaPvpCZkD3vfZbkcYVxHUJ 05ipIGYC35IOsvR49cHRBl+Mcqqxav5hCZXosomODBER4SlvFYYa9CHRngx8/6rTWUuU Woaw== X-Gm-Message-State: AOAM532p95V/GCv9WEiCJ11QRNLU0bdX0IuPd1SRwk5qiAe/mli6W292 1r1diki/UnDVUbYoDj3bq2Ctb10wnlM= X-Google-Smtp-Source: ABdhPJwr6+bnTjWiqQtguZ4ziJGjLXvQhPLKCUpg9BP1fWujKsSOpVQ18DxW9zhbhpZl5zvE1334R+e8vtc= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a25:af13:: with SMTP id a19mr10423130ybh.543.1644822047431; Sun, 13 Feb 2022 23:00:47 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:34 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-16-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 15/27] KVM: arm64: Add consistency checking for frac fields of ID registers From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230049_518345_00CA2500 X-CRM114-Status: GOOD ( 20.53 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Feature fractional field of an ID register cannot be simply validated at KVM_SET_ONE_REG because its validity depends on its (main) feature field value, which could be in a different ID register (and might be set later). Validate fractional fields at the first KVM_RUN instead. 
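To make the rule concrete, below is a small self-contained model of the check that runs at the first KVM_RUN. It is not kernel code; it assumes an unsigned, lower-is-safe field and that the main feature field has already been validated against the host limit, which is what the actual implementation in vcpu_id_reg_feature_frac_check() relies on.

#include <assert.h>

/*
 * Model of the frac-field rule: if the main feature field is below the host
 * limit, any fractional value is acceptable; if it is at the limit, the
 * fractional field must not exceed the host's fractional limit.
 */
static int frac_field_ok(unsigned int feat, unsigned int feat_lim,
			 unsigned int frac, unsigned int frac_lim)
{
	if (feat != feat_lim)
		return 0;	/* main feature is below the limit */
	return (frac <= frac_lim) ? 0 : -1;
}

int main(void)
{
	/* RAS below the host limit: RAS_frac is unconstrained. */
	assert(frac_field_ok(1, 2, 2, 1) == 0);
	/* RAS at the limit, RAS_frac within the host's frac limit: OK. */
	assert(frac_field_ok(1, 1, 1, 2) == 0);
	/* RAS at the limit but RAS_frac above the host's frac limit: rejected. */
	assert(frac_field_ok(1, 1, 2, 1) != 0);
	return 0;
}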
Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/arm.c | 3 + arch/arm64/kvm/sys_regs.c | 115 +++++++++++++++++++++++++++++- 3 files changed, 116 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 9ffe6604a58a..5e53102a1ac1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -748,6 +748,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, void set_default_id_regs(struct kvm *kvm); int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval); +int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu); /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 91110d996ed6..e7dcc7704302 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -599,6 +599,9 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) if (likely(vcpu_has_run_once(vcpu))) return 0; + if (!kvm_vm_is_protected(kvm) && kvm_id_regs_check_frac_fields(vcpu)) + return -EPERM; + kvm_arm_vcpu_init_debug(vcpu); if (likely(irqchip_in_kernel(kvm))) { diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 9d7041a10b41..b7329075a69f 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -793,9 +793,6 @@ static struct id_reg_info id_aa64pfr0_el1_info = { static struct id_reg_info id_aa64pfr1_el1_info = { .sys_reg = SYS_ID_AA64PFR1_EL1, - .ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR1_RASFRAC) | - ARM64_FEATURE_MASK(ID_AA64PFR1_MPAMFRAC) | - ARM64_FEATURE_MASK(ID_AA64PFR1_CSV2FRAC), .init = init_id_aa64pfr1_el1_info, .validate = validate_id_aa64pfr1_el1, .vcpu_mask = vcpu_mask_id_aa64pfr1_el1, @@ -3484,10 +3481,108 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) return write_demux_regids(uindices); } +/* ID register's fractional field information with its feature field. */ +struct feature_frac { + u32 id; + u32 shift; + u32 frac_id; + u32 frac_shift; +}; + +static struct feature_frac feature_frac_table[] = { + { + .frac_id = SYS_ID_AA64PFR1_EL1, + .frac_shift = ID_AA64PFR1_RASFRAC_SHIFT, + .id = SYS_ID_AA64PFR0_EL1, + .shift = ID_AA64PFR0_RAS_SHIFT, + }, + { + .frac_id = SYS_ID_AA64PFR1_EL1, + .frac_shift = ID_AA64PFR1_MPAMFRAC_SHIFT, + .id = SYS_ID_AA64PFR0_EL1, + .shift = ID_AA64PFR0_MPAM_SHIFT, + }, + { + .frac_id = SYS_ID_AA64PFR1_EL1, + .frac_shift = ID_AA64PFR1_CSV2FRAC_SHIFT, + .id = SYS_ID_AA64PFR0_EL1, + .shift = ID_AA64PFR0_CSV2_SHIFT, + }, +}; + +/* + * Return non-zero if the feature/fractional fields pair are not + * supported. Return zero otherwise. + * This function validates only the fractional feature field, + * and relies on the fact the feature field is validated before + * through arm64_check_features_kvm. + */ +static int vcpu_id_reg_feature_frac_check(const struct kvm_vcpu *vcpu, + const struct feature_frac *ftr_frac) +{ + const struct id_reg_info *id_reg; + u32 id; + u64 val, lim, mask; + + /* Check if the feature field value is same as the limit */ + id = ftr_frac->id; + id_reg = GET_ID_REG_INFO(id); + + mask = (u64)ARM64_FEATURE_FIELD_MASK << ftr_frac->shift; + val = __read_id_reg(vcpu, id) & mask; + lim = id_reg ? id_reg->vcpu_limit_val : read_sanitised_ftr_reg(id); + lim &= mask; + + if (val != lim) + /* + * The feature level is lower than the limit. + * Any fractional version should be fine. 
+ */ + return 0; + + /* Check the fractional feature field */ + id = ftr_frac->frac_id; + id_reg = GET_ID_REG_INFO(id); + + mask = (u64)ARM64_FEATURE_FIELD_MASK << ftr_frac->frac_shift; + val = __read_id_reg(vcpu, id) & mask; + lim = id_reg ? id_reg->vcpu_limit_val : read_sanitised_ftr_reg(id); + lim &= mask; + + if (val == lim) + /* + * Both the feature and fractional fields are the same + * as limit. + */ + return 0; + + return arm64_check_features_kvm(id, val, lim); +} + +int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu) +{ + int i, err; + const struct feature_frac *frac; + + /* + * Check ID registers' fractional fields, which aren't checked + * at KVM_SET_ONE_REG. + */ + for (i = 0; i < ARRAY_SIZE(feature_frac_table); i++) { + frac = &feature_frac_table[i]; + err = vcpu_id_reg_feature_frac_check(vcpu, frac); + if (err) + return err; + } + return 0; +} + static void id_reg_info_init_all(void) { int i; struct id_reg_info *id_reg; + struct feature_frac *frac; + u64 ftr_mask = ARM64_FEATURE_FIELD_MASK; for (i = 0; i < ARRAY_SIZE(id_reg_info_table); i++) { id_reg = (struct id_reg_info *)id_reg_info_table[i]; @@ -3496,6 +3591,20 @@ static void id_reg_info_init_all(void) id_reg_info_init(id_reg); } + + /* + * Update ignore_mask of ID registers based on fractional fields + * information. Any ID register that have fractional fields + * is expected to have its own id_reg_info. + */ + for (i = 0; i < ARRAY_SIZE(feature_frac_table); i++) { + frac = &feature_frac_table[i]; + id_reg = GET_ID_REG_INFO(frac->frac_id); + if (WARN_ON_ONCE(!id_reg)) + continue; + + id_reg->ignore_mask |= ftr_mask << frac->frac_shift; + } } void kvm_sys_reg_table_init(void) From patchwork Mon Feb 14 06:57:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745055 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A1937C433F5 for ; Mon, 14 Feb 2022 07:09:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=MBgmuwQVjXuxIl2fuZ0N31NGkM8dgd2TH+2WNA3ZZYE=; b=wwyBY9K/U6DQ4t6E64I+98WAR6 L+rH3uPJc47iYgMxD4diie5t8D2sbWNI6NOjKp7kebXXc0IiFXhk5B3KVGdUCmgJAWBEipgfLyFe+ A3i5wtMUkUcL5U8wpvzRCYOqzXUCYJfPJ1sjol+WO6yPkwN13LTAVdEJLxsN0cDZ5td3VFrXuCbg7 wgYNzcBU6TF+6ic+n5g7bPtlGDFBJmREMM7bRxIEuuv3RQDQAhcMISQ3vC+Geqc7ZkomTbGAQOMuz +OAUiu6x3dUT7oOPfnmvVPRwXBM5cugf9bfat/+adyMrXLP4DPnrc34QoiBoS72EkU0r3To6oNlRQ dAzuxBuw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVSl-00Da9A-9y; Mon, 14 Feb 2022 07:07:55 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVM1-00DXFn-1L for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:00:59 +0000 Received: by 
mail-pj1-x104a.google.com with SMTP id jf17-20020a17090b175100b001b90cf26a4eso11534089pjb.3 for ; Sun, 13 Feb 2022 23:00:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=LqqSQ+pfjPkcBMVZaT7DXOpSryezslmtHzw2gvxdUKE=; b=BXNGQJEPXj526bgkijx0svS6AKh+wH5vz+CbiZqi9LTPYrppZNqH1xhYeJcxPJ5KmD fLdIRkEuCi8EKF+K6RLiw0e7mv4Zn8WsF/aWGLoL/NQJP3thK4ZeeCZVnDZ5DiTkKtyh CB16pqTyBC0mYbZbgg7MY4vqc0YSPHcLL1B/pPU3SMrbo90o8frKVSPeqB1UR3/gCfdi 6WeqIw7fNOXxjZ1N3MpkxxiG1jx9t7gSBJmh910trAu4csbKLiS9cO+nnv3tl70rPFgs ROEn0N+WhhmpRsZJLhuJEp0M7OD90CUJtgic/1vGeRE8MgxoOfAcDDAviTAS3aXiXn0s K9+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=LqqSQ+pfjPkcBMVZaT7DXOpSryezslmtHzw2gvxdUKE=; b=Y37uKzwTrpnb7UJOi8w4d/clOEtPmRJklaj+TygHn8bJOi6zMYaBpOnwkQIsF6EQmA qvKnuhyUIWHx53BMp9Pt/PgZ267DCfpg8qtwkGnIJ7T3Wrpc3Koplrj+q3Gn+snjs0O/ 7ZEp8Byn7xUhedZLcQtIXOONl1II/glRuq1Oi1UkWShsP8+VMpUGakuJed/maiwoGzdb tw3WfCD06GjHekUpVXejLZooeCKn5yJapYYSaS+ceLVBctLVGHmH9pCLJmMBgXc1Af8g 77Fcm5Y6OD6KfU4BpHSj/O/VQy+SFdwRMZyBp6NSK0RZsHFfBrGz+C4cBBCO2jBE+5E7 SNlA== X-Gm-Message-State: AOAM53305V0RWtCLpKqLM0c9twnGXoBmM3plLyLCsEne/SjxTYqYXiLO 9WtcqnwpLhgtEMdZJjVRpDGPdL6it8I= X-Google-Smtp-Source: ABdhPJz6IFXa9hWbUTsAZTdoeoQlMhXZJjw8CsklsTYlXwId7cdlTGLWp5HcrUaCg10o3Cb+CSmqrI+AFz0= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a63:a1f:: with SMTP id 31mr4796173pgk.79.1644822055226; Sun, 13 Feb 2022 23:00:55 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:35 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-17-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 16/27] KVM: arm64: Introduce KVM_CAP_ARM_ID_REG_CONFIGURABLE capability From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230057_153658_B1CE7AC8 X-CRM114-Status: GOOD ( 10.87 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Introduce a new capability KVM_CAP_ARM_ID_REG_CONFIGURABLE to indicate that ID registers are writable by userspace. Signed-off-by: Reiji Watanabe --- Documentation/virt/kvm/api.rst | 12 ++++++++++++ arch/arm64/kvm/arm.c | 1 + include/uapi/linux/kvm.h | 1 + 3 files changed, 14 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index a4267104db50..901ab8486189 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -2607,6 +2607,10 @@ EINVAL. After the vcpu's SVE configuration is finalized, further attempts to write this register will fail with EPERM. 
+The arm64 ID registers (where Op0=3, Op1=0, CRn=0, 0<=CRm<8, 0<=Op2<8) +are allowed to set by userspace if KVM_CAP_ARM_ID_REG_CONFIGURABLE is +available. They become immutable after calling KVM_RUN on any of the +vcpus in the guest (modifying values of those registers will fail). MIPS registers are mapped using the lower 32 bits. The upper 16 of that is the register group type: @@ -7561,3 +7565,11 @@ The argument to KVM_ENABLE_CAP is also a bitmask, and must be a subset of the result of KVM_CHECK_EXTENSION. KVM will forward to userspace the hypercalls whose corresponding bit is in the argument, and return ENOSYS for the others. + +8.35 KVM_CAP_ARM_ID_REG_CONFIGURABLE +------------------------------------ + +:Architectures: arm64 + +This capability indicates that userspace can modify the ID registers +via KVM_SET_ONE_REG ioctl. diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index e7dcc7704302..68ffced5b09e 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -210,6 +210,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_SET_GUEST_DEBUG: case KVM_CAP_VCPU_ATTRIBUTES: case KVM_CAP_PTP_KVM: + case KVM_CAP_ARM_ID_REG_CONFIGURABLE: r = 1; break; case KVM_CAP_SET_GUEST_DEBUG2: diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 5191b57e1562..6e09f2c2c0c1 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1134,6 +1134,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_VM_GPA_BITS 207 #define KVM_CAP_XSAVE2 208 #define KVM_CAP_SYS_ATTRIBUTES 209 +#define KVM_CAP_ARM_ID_REG_CONFIGURABLE 210 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Mon Feb 14 06:57:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745057 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3EA62C433F5 for ; Mon, 14 Feb 2022 07:12:07 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=Ri60QVd1tmIjykimkvspA4wiz6Ek3AS5DYyLxidK75Y=; b=KtDH7B/BfrzQEiY+ppP9e0Q3iH nVn1+3eLlGxzaDKBZRcPsHZQ+DDo7ooUuYFfOOoOtE5H8jJ8vaDRE2lmDGAuqfDXXLnqCxiKqQGcU kjyfi1+qfimSlzHUMdga0VanqK+Fo7F6OpvYVRe1/4RzBwJoAnahyCaaWWzGptWllFzQU8WvkBMLX xJD8EVhraG+8a6P82eMCI4NCslzwUbrD47yHzfrvfouKiFp+6IUs2XBSq+q2SEgNNAPVGZP34LK/k /deIjAySXG9YNZUgx6jTcoK0Qjg0Uf57dd7MtA2hWI031wmrdaZsivxHMzr6kUZpjd6JTrhhV9yMb 9N0Ct1xA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVUu-00Dax9-0W; Mon, 14 Feb 2022 07:10:09 +0000 Received: from mail-pl1-x649.google.com ([2607:f8b0:4864:20::649]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVM7-00DXIJ-Fv for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:01:08 +0000 Received: by mail-pl1-x649.google.com with SMTP id 
v20-20020a1709028d9400b0014ca63d0f10so5790423plo.5 for ; Sun, 13 Feb 2022 23:01:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=a/yuK2O/Re7Co2Io4cdVlJTyMwtdZLZkr8qZ0mg39fY=; b=DEs5aERxxouyGvYTAnCFcl4JtaKrStkjD147SgQbHMxZfk/mxYA1lUocOz0JtRZCcO /DPTt0Cxcj0WTjEHWspY3rKu+rSlnOqedTIN4eLSTboAlacvVsSLQIEkGh1v+9kUohT3 Pibn6qnRA89Ahwrgq3+EUNNhGzySlNgjYfHXrL/wRxMjBDQKLViShmzppFURRJdOo/JA vcymPPh4Zg+zwaGwJ1SXA/HnY7YrQf9V6BDH7oITObR2TLwI9TnLZPAr7bUdDsLBRreR KIeaJwe6WQgvNPfILbpKJk1uP7kpGZofuKt16Psus71TAEjtWlJbyR6sNnGrRpzJXqqU MMxQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=a/yuK2O/Re7Co2Io4cdVlJTyMwtdZLZkr8qZ0mg39fY=; b=2PqEjlzTmZ/BCyKmhjoXBLQfe5oGIRyY2h8Arfpoz77BkTAYGCpDjCTWa7U1xR2qhh 7xmOK3KWnC6q26dSrQIeFL5Yd3sRjQ4Z1+R8V0CSjSpmN2uBUhLXLcIt489cDh+x8Xio Cy0/ON477zTN50vzhJLgx9GwJUxRT+P0NT7Kw3pSHWMhDdxV3nSnQ3Oo6nju/qF3GeZt kpF0XZmiboMneN7xSQRw8LWA5e9JMSNfHGjQg1eW90RAbxG+bxOkguYTPTCeAUH826h3 JcyYQNNAWlkdIvNsUth/Ip3ImD3riDMnSGfAHL0txJeoM/o5EUbl1aqIG3gEfWf/ljAN +VSA== X-Gm-Message-State: AOAM5323+OuFfyCqcqHeg2dIvGfAbSlMjAzqlHqTEUJhJMTBZvkBnvtA fpqdDci9dsa3DUxZQe6eNk6QeBYqfxs= X-Google-Smtp-Source: ABdhPJzwMoiMLygZXRL9DWPsk4zOABdUgqc/AmJEZ6NkalQtrNqKVtTHQga2BCCTb2+vg3XvCeds31HG5kI= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:90a:e40f:: with SMTP id hv15mr1640080pjb.1.1644822059803; Sun, 13 Feb 2022 23:00:59 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:36 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-18-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 17/27] KVM: arm64: Add kunit test for ID register validation From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230103_909015_F085D06D X-CRM114-Status: GOOD ( 24.77 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add kunit tests for functions that are used for validation of ID registers, CONFIG_KVM_KUNIT_TEST option to enable the tests, and .kunitconfig to run the kunit tests. One line change below is needed in the default arm64.py to fully run all of those kunit tests. 
----------------------------------------------------------------------- $ diff tools/testing/kunit/qemu_configs/arm64.py arm64_kvm_min.py 12c12 < extra_qemu_params=['-machine virt', '-cpu cortex-a57']) --- > extra_qemu_params=['-M virt,virtualization=on,mte=on', '-cpu max,sve=on']) ----------------------------------------------------------------------- The outputs from the tests are: ----------------------------------------------------------------------- $ tools/testing/kunit/kunit.py run --timeout=60 --jobs=`nproc --all` \ --arch=arm64 --cross_compile=aarch64-linux-gnu- \ --qemu_config arm64_kvm_min.py \ --kunitconfig=arch/arm64/kvm/.kunitconfig [20:02:52] Configuring KUnit Kernel ... [20:02:52] Building KUnit Kernel ... Populating config with: $ make ARCH=arm64 olddefconfig CROSS_COMPILE=aarch64-linux-gnu- O=.kunit Building with: $ make ARCH=arm64 --jobs=96 CROSS_COMPILE=aarch64-linux-gnu- O=.kunit [20:02:59] Starting KUnit Kernel (1/1)... [20:02:59] ============================================================ Running tests with: $ qemu-system-aarch64 -nodefaults -m 1024 -kernel .kunit/arch/arm64/boot/Image.gz -append 'mem=1G console=tty kunit_shutdown=halt console=ttyAMA0 kunit_shutdown=reboot' -no-reboot -nographic -serial stdio -M virt,virtualization=on,mte=on -cpu max,sve=on [20:03:00] ========== kvm-sys-regs-test-suite (14 subtests) =========== [20:03:00] [PASSED] vcpu_id_reg_feature_frac_check_test [20:03:00] [PASSED] validate_id_aa64mmfr0_tgran2_test [20:03:01] [PASSED] validate_id_aa64mmfr0_tgran2_test [20:03:01] [PASSED] validate_id_aa64mmfr0_tgran2_test [20:03:01] [PASSED] validate_id_aa64pfr0_el1_test [20:03:01] [PASSED] validate_id_aa64pfr1_el1_test [20:03:01] [PASSED] validate_id_aa64isar0_el1_test [20:03:01] [PASSED] validate_id_aa64isar1_el1_test [20:03:01] [PASSED] validate_id_aa64mmfr0_el1_test [20:03:01] [PASSED] validate_id_aa64mmfr1_el1_test [20:03:01] [PASSED] validate_id_aa64dfr0_el1_test [20:03:01] [PASSED] validate_id_dfr0_el1_test [20:03:01] [PASSED] validate_mvfr1_el1_test [20:03:01] [PASSED] validate_id_reg_test [20:03:01] ============= [PASSED] kvm-sys-regs-test-suite ============= [20:03:01] ============================================================ [20:03:01] Testing complete. Passed: 14, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0 [20:03:01] Elapsed time: 8.534s total, 0.003s configuring, 6.962s building, 1.569s running ----------------------------------------------------------------------- Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/.kunitconfig | 4 + arch/arm64/kvm/Kconfig | 11 + arch/arm64/kvm/sys_regs.c | 4 + arch/arm64/kvm/sys_regs_test.c | 1018 ++++++++++++++++++++++++++++++++ 4 files changed, 1037 insertions(+) create mode 100644 arch/arm64/kvm/.kunitconfig create mode 100644 arch/arm64/kvm/sys_regs_test.c diff --git a/arch/arm64/kvm/.kunitconfig b/arch/arm64/kvm/.kunitconfig new file mode 100644 index 000000000000..c564c98fc319 --- /dev/null +++ b/arch/arm64/kvm/.kunitconfig @@ -0,0 +1,4 @@ +CONFIG_KUNIT=y +CONFIG_VIRTUALIZATION=y +CONFIG_KVM=y +CONFIG_KVM_KUNIT_TEST=y diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig index 8a5fbbf084df..0d628d0e7dd5 100644 --- a/arch/arm64/kvm/Kconfig +++ b/arch/arm64/kvm/Kconfig @@ -56,4 +56,15 @@ config NVHE_EL2_DEBUG If unsure, say N. +config KVM_KUNIT_TEST + bool "KUnit tests for KVM on ARM64 processors" if !KUNIT_ALL_TESTS + depends on KVM && KUNIT + default KUNIT_ALL_TESTS + help + Say Y here to enable KUnit tests for the KVM on ARM64. 
+ Only useful for KVM/ARM development and are not for inclusion into + a production build. + + If unsure, say N. + endif # VIRTUALIZATION diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index b7329075a69f..77a106d255be 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -3686,3 +3686,7 @@ int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval) return __modify_kvm_id_reg(kvm, id, val, preserve_mask); } + +#if IS_ENABLED(CONFIG_KVM_KUNIT_TEST) +#include "sys_regs_test.c" +#endif diff --git a/arch/arm64/kvm/sys_regs_test.c b/arch/arm64/kvm/sys_regs_test.c new file mode 100644 index 000000000000..30603d0623cd --- /dev/null +++ b/arch/arm64/kvm/sys_regs_test.c @@ -0,0 +1,1018 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * KUnit tests for arch/arm64/kvm/sys_regs.c. + */ + +#include +#include +#include +#include +#include +#include "asm/sysreg.h" + +/* + * Create a vcpu with the minimum fields required for testing in this file + * including the struct kvm. Any resources that are allocated by this + * function must be allocated by kunit_* so that we don't need to explicitly + * free them. + */ +static struct kvm_vcpu *test_kvm_vcpu_init(struct kunit *test) +{ + struct kvm_vcpu *vcpu; + struct kvm *kvm; + + kvm = kunit_kzalloc(test, sizeof(struct kvm), GFP_KERNEL); + if (!kvm) + return NULL; + + vcpu = kunit_kzalloc(test, sizeof(struct kvm_vcpu), GFP_KERNEL); + if (!vcpu) { + kunit_kfree(test, kvm); + return NULL; + } + + vcpu->cpu = -1; + vcpu->kvm = kvm; + vcpu->vcpu_id = 0; + + return vcpu; +} + +static void test_kvm_vcpu_fini(struct kunit *test, struct kvm_vcpu *vcpu) +{ + if (vcpu->kvm) + kunit_kfree(test, vcpu->kvm); + + kunit_kfree(test, vcpu); +} + +/* Test parameter information to test arm64_check_features */ +struct check_features_test { + u64 check_types; + u64 value; + u64 limit; + int expected; +}; + + +/* Used to define test parameters of vcpu_id_reg_feature_frac_check_test() */ +struct feat_info { + u32 id; + u32 shift; + u32 value; + u32 limit; +}; + +struct frac_check_test { + struct feat_info feat; + struct feat_info frac_feat; + int ret; +}; + +#define FRAC_FEAT(id, shift, value, limit) {id, shift, value, limit} + +/* Tests parameters of vcpu_id_reg_feature_frac_check_test() */ +struct frac_check_test frac_params[] = { + { + /* + * The feature value is smaller than its limit. + * Expect no error regardless of the frac value. + */ + FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2), + FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1), + 0, + }, + { + /* + * The feature value is smaller than its limit. + * Expect no error regardless of the frac value. + */ + FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2), + FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2), + 0, + }, + { + /* + * The feature value is smaller than its limit. + * Expect no error regardless of the frac value. + */ + FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2), + FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1), + 0, + }, + { + /* + * Both the feature and frac values are same as their limits. + * Expect no error. + */ + FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1), + FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1), + 0, + }, + { + /* + * The feature value is same as its limit, and the frac value + * is smaller than its limit. Expect no error. 
+ */ + FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1), + FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2), + 0, + }, + { + /* + * The feature value is same as its limit, and the frac value + * is larger than its limit. Expect an error. + */ + FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1), + FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1), + -E2BIG, + }, + +}; + +static void frac_case_to_desc(struct frac_check_test *t, char *desc) +{ + struct feat_info *feat = &t->feat; + struct feat_info *frac = &t->frac_feat; + + snprintf(desc, KUNIT_PARAM_DESC_SIZE, + "feat - shift:%d, val:%d, lim:%d, frac - shift:%d, val:%d, lim:%d\n", + feat->shift, feat->value, feat->limit, + frac->shift, frac->value, frac->limit); +} + +KUNIT_ARRAY_PARAM(frac, frac_params, frac_case_to_desc); + +/* Tests for vcpu_id_reg_feature_frac_check(). */ +static void vcpu_id_reg_feature_frac_check_test(struct kunit *test) +{ + struct kvm_vcpu *vcpu; + u32 id, frac_id; + struct id_reg_info id_data, frac_id_data; + struct id_reg_info *idr, *frac_idr; + struct feature_frac frac_data, *frac = &frac_data; + const struct frac_check_test *frct = test->param_value; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + id = frct->feat.id; + frac_id = frct->frac_feat.id; + + frac->id = id; + frac->shift = frct->feat.shift; + frac->frac_id = frac_id; + frac->frac_shift = frct->frac_feat.shift; + + idr = GET_ID_REG_INFO(id); + frac_idr = GET_ID_REG_INFO(frac_id); + + /* Save the original id_reg_info (and restore later) */ + memcpy(&id_data, idr, sizeof(id_data)); + memcpy(&frac_id_data, frac_idr, sizeof(frac_id_data)); + + /* The id could be same as the frac_id */ + idr->vcpu_limit_val = (u64)frct->feat.limit << frac->shift; + frac_idr->vcpu_limit_val |= + (u64)frct->frac_feat.limit << frac->frac_shift; + + write_kvm_id_reg(vcpu->kvm, id, (u64)frct->feat.value << frac->shift); + write_kvm_id_reg(vcpu->kvm, frac_id, + (u64)frct->frac_feat.value << frac->frac_shift); + + KUNIT_EXPECT_EQ(test, + vcpu_id_reg_feature_frac_check(vcpu, frac), + frct->ret); + + /* Restore id_reg_info */ + memcpy(idr, &id_data, sizeof(id_data)); + memcpy(frac_idr, &frac_id_data, sizeof(frac_id_data)); +} + +/* + * Test parameter information to test validate_id_aa64mmfr0_tgran2 + * and validate_id_aa64mmfr0_el1_test. + */ +struct tgran_test { + int gran2_field; + int gran2; + int gran2_lim; + int gran1; + int gran1_lim; + int ret; +}; + +/* + * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran4_2. + * Defined values for the field are: + * 0x0: Support for 4KB granule at stage 2 is identified in TGran4. + * 0x1: 4KB granule not supported at stage 2. + * 0x2: 4KB granule supported at stage 2. + * 0x3: 4KB granule at stage 2 supports 52-bit input and output addresses. + * + * Defined values for the TGran4 are: + * 0x0: 4KB granule supported. + * 0x1: 4KB granule supports 52-bit input and output addresses. + * 0xf: 4KB granule not supported. 
+ */ +struct tgran_test tgran4_2_test_params[] = { + /* Enable 4KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 2, 0, 0, 0}, + /* Enable 4KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 1, 0, 0, -E2BIG}, + /* Disable 4KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 2, 0, 0, 0}, + /* Enable 4KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 0, 0, 0, 0}, + /* Disable 4KB granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1, 0xf, 0, 0}, + /* Enable 4KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1, 0, 0, -E2BIG}, + /* Disable 4KB granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2, 0xf, 0, 0}, + /* + * Enable 4KB granule with 52 bit address on the host that doesn't + * support it. + */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2, 1, 0, -E2BIG}, + /* Disable 4KB granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0, 0, 0xf, 0}, + /* Disable 4KB granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0, 0, 0, 0}, + /* Enable 4KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0, 0xf, 0xf, -E2BIG}, + /* Enable 4KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0, 0, 0, 0}, +}; + +/* + * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran64_2. + * Defined values for the field are: + * 0x0: Support for 64KB granule at stage 2 is identified in TGran64. + * 0x1: 64KB granule not supported at stage 2. + * 0x2: 64KB granule supported at stage 2. + * 0x3: 64KB granule at stage 2 supports 52-bit input and output addresses. + * + * Defined values for the TGran64 are: + * 0x0: 64KB granule supported. + * 0xf: 64KB granule not supported. + */ +struct tgran_test tgran64_2_test_params[] = { + /* Enable 64KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 2, 0, 0, 0}, + /* Enable 64KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 1, 0, 0, -E2BIG}, + /* Enable 64KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 2, 0, 0, 0}, + /* Enable 64KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 0, 0, 0, 0}, + /* Disable 64KB granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1, 0xf, 0, 0}, + /* Enable 64KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1, 0, 0, -E2BIG}, + /* Disable 64KB granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 2, 0xf, 0, 0}, + /* Disable 64KB granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0, 0, 0xf, 0}, + /* Disable 64KB granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0, 0, 0, 0}, + /* Enable 64KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0, 0xf, 0xf, -E2BIG}, + /* Enable 64KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0, 0, 0, 0}, +}; + +/* + * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran16_2 + * Defined values for the field are: + * 0x0: Support for 16KB granule at stage 2 is identified in TGran16. + * 0x1: 16KB granule not supported at stage 2. + * 0x2: 16KB granule supported at stage 2. + * 0x3: 16KB granule at stage 2 supports 52-bit input and output addresses. + * + * Defined values for the TGran16 are: + * 0x0: 16KB granule not supported. + * 0x1: 16KB granule supported. 
+ * 0x2: 16KB granule supports 52-bit input and output addresses. + */ +struct tgran_test tgran16_2_test_params[] = { + /* Enable 16KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 2, 0, 0, 0}, + /* Enable 16KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 1, 0, 0, -E2BIG}, + /* Disable 16KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 2, 0, 0, 0}, + /* Disable 16KB granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 0, 0, 0, 0}, + /* Disable 16KB granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1, 0, 0, 0}, + /* Enable 16KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1, 1, 0, -E2BIG}, + /* Disable 16KB granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2, 0, 0, 0}, + /* + * Enable 16KB granule with 52 bit address on the host that doesn't + * support it. + */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2, 2, 2, -E2BIG}, + /* Disable 16KB granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0, 0, 0, 0}, + /* Disable 16KB granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0, 0, 1, 0}, + /* Enable 16KB granule on the host that doesn't support the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0, 0, 0, -E2BIG}, + /* Enable 16KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0, 0, 1, 0}, + /* Enable 16KB granule on the host that supports the granule */ + {ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0, 0, 2, 0}, +}; + +static void tgran2_case_to_desc(struct tgran_test *t, char *desc) +{ + snprintf(desc, KUNIT_PARAM_DESC_SIZE, + "gran2(field=%d): val=%d, lim=%d gran1: val=%d limit=%d\n", + t->gran2_field, t->gran2, t->gran2_lim, + t->gran1, t->gran1_lim); +} + +KUNIT_ARRAY_PARAM(tgran4_2, tgran4_2_test_params, tgran2_case_to_desc); +KUNIT_ARRAY_PARAM(tgran64_2, tgran64_2_test_params, tgran2_case_to_desc); +KUNIT_ARRAY_PARAM(tgran16_2, tgran16_2_test_params, tgran2_case_to_desc); + +#define MAKE_MMFR0_TGRAN(shift1, gran1, shift2, gran2) \ + (((u64)((gran1) & 0xf) << (shift1)) | \ + ((u64)((gran2) & 0xf) << (shift2))) + +/* Return the bit position of TGranX field for the given TGranX_2 field. */ +static int tgran2_to_tgran1_shift(int tgran2_shift) +{ + int tgran1_shift = -1; + + switch (tgran2_shift) { + case ID_AA64MMFR0_TGRAN4_2_SHIFT: + tgran1_shift = ID_AA64MMFR0_TGRAN4_SHIFT; + break; + case ID_AA64MMFR0_TGRAN64_2_SHIFT: + tgran1_shift = ID_AA64MMFR0_TGRAN64_SHIFT; + break; + case ID_AA64MMFR0_TGRAN16_2_SHIFT: + tgran1_shift = ID_AA64MMFR0_TGRAN16_SHIFT; + break; + default: + break; + } + + return tgran1_shift; +} + +/* Tests for validate_id_aa64mmfr0_el1(). */ +static void validate_id_aa64mmfr0_tgran2_test(struct kunit *test) +{ + const struct tgran_test *t = test->param_value; + int shift1, shift2; + u64 v, lim; + + shift2 = t->gran2_field; + shift1 = tgran2_to_tgran1_shift(shift2); + v = MAKE_MMFR0_TGRAN(shift1, t->gran1, shift2, t->gran2); + lim = MAKE_MMFR0_TGRAN(shift1, t->gran1_lim, shift2, t->gran2_lim); + + KUNIT_EXPECT_EQ(test, aa64mmfr0_tgran2_check(shift2, v, lim), t->ret); +} + +/* Tests for validate_id_aa64pfr0_el1(). */ +static void validate_id_aa64pfr0_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64PFR0_EL1); + + v = 0; + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + /* + * Tests for GIC. 
+ * GIC must be 1 when vGIC3 is configured. + */ + v = 0x0000000; /* GIC = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + /* Test with VGIC_V2 */ + vcpu->kvm->arch.vgic.in_kernel = true; + vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V2; + + v = 0x0000000; /* GIC = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + /* Test with VGIC_V3 */ + vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V3; + + v = 0x0000000; /* GIC = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + v = 0x1000000; /* GIC = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + /* Restore the original VGIC state */ + vcpu->kvm->arch.vgic.in_kernel = false; + vcpu->kvm->arch.vgic.vgic_model = 0; + + /* + * Tests for AdvSIMD/FP. + * AdvSIMD must have the same value as FP. + */ + + /* Tests with SVE disabled */ + v = 0x000010000; /* AdvSIMD = 0, FP = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + v = 0x000100000; /* AdvSIMD = 1, FP = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + v = 0x000ff0000; /* AdvSIMD = 0xf, FP = 0xf */ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + v = 0x100000000; /* SVE =1, AdvSIMD = 0, FP = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + if (!system_supports_sve()) { + kunit_skip(test, "No SVE support. Partial skip)"); + /* Not reached */ + } + + /* Tests with SVE enabled */ + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE; + + v = 0x100000000; /* SVE =1, AdvSIMD = 0, FP = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + v = 0x100ff0000; /* SVE =1, AdvSIMD = 0, FP = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0); + + vcpu->arch.flags &= ~KVM_ARM64_GUEST_HAS_SVE; +} + +/* Tests for validate_id_aa64pfr1_el1() */ +static void validate_id_aa64pfr1_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64PFR1_EL1); + v = 0; + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0); + + /* Tests for MTE */ + + /* Tests with MTE disabled */ + KUNIT_EXPECT_FALSE(test, vcpu->kvm->arch.mte_enabled); + + v = 0x000; /* MTE = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0); + + v = 0x100; /* MTE = 1*/ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0); + + if (!system_supports_mte()) { + kunit_skip(test, "(No MTE support. Partial skip)"); + /* Not reached */ + } + + /* Tests with MTE enabled */ + vcpu->kvm->arch.mte_enabled = true; + + v = 0x100; /* MTE = 1*/ + KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0); + + v = 0x0; /* MTE = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0); +} + +/* Tests for validate_id_aa64isar0_el1(). */ +static void validate_id_aa64isar0_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64ISAR0_EL1); + + v = 0; + KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + /* + * Tests for SM3/SM4. + * Arm ARM says SM3 must have the same value as SM4. 
+ */ + + v = 0x01000000000; /* SM4 = 0, SM3 = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x10000000000; /* SM4 = 1, SM3 = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x11000000000; /* SM3 = SM4 = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + + /* + * Tests for SHA1/SHA2/SHA3. Arm ARM says: + * If SHA1 is 0x0, both SHA2 and SHA3 must be 0x0. + * If SHA2 is 0x0, SHA1 must be 0x0. + * If SHA2 is 0x2, SHA3 must be 0x1. + * If SHA3 is 0x1, SHA2 msut be 0x2. + */ + + v = 0x000000100; /* SHA2 = 0, SHA1 = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x000001000; /* SHA2 = 1, SHA1 = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x000001100; /* SHA2 = 1, SHA1 = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x100002000; /* SHA3 = 1, SHA2 = 2 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x000002000; /* SHA3 = 0, SHA2 = 2 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x100001000; /* SHA3 = 1, SHA2 = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x200000000; /* SHA3 = 2, SHA1 = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x200001100; /* SHA3 = 2, SHA2= 1, SHA1 = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); + + v = 0x300003300; /* SHA3 = 3, SHA2 = 3, SHA1 = 3 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0); +} + +/* Tests for validate_id_aa64isar1_el1() */ +static void validate_id_aa64isar1_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64ISAR1_EL1); + + v = 0; + KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + /* + * Tests for GPI/GPA/API/APA. + * Arm ARM says: + * If GPA is non-zero, GPI must be zero. + * If GPI is non-zero, GPA must be zero. + * If APA is non-zero, API must be zero. + * If API is non-zero, APA must be zero. + */ + + /* Tests with PTRAUTH disabled */ + v = 0x11000110; /* GPI = 1, GPA = 1, API = 1, APA = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x11000100; /* GPI = 1, GPA = 1, API = 1, APA = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x11000010; /* GPI = 1, GPA = 1, API = 0, APA = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x10000110; /* GPI = 1, GPA = 0, API = 1, APA = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x01000110; /* GPI = 0, GPA = 1, API = 1, APA = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + if (!system_has_full_ptr_auth()) { + kunit_skip(test, "(No PTRAUTH support. 
Partial skip)"); + /* Not reached */ + } + + /* Tests with PTRAUTH enabled */ + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH; + + v = 0x10000100; /* GPI = 1, GPA = 0, API = 1, APA = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x10000010; /* GPI = 1, GPA = 0, API = 0, APA = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x01000100; /* GPI = 0, GPA = 1, API = 1, APA = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0x01000010; /* GPI = 0, GPA = 1, API = 0, APA = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); + + v = 0; + KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0); +} + +/* Tests for validate_id_aa64mmfr0_el1() */ +static void validate_id_aa64mmfr0_el1_test(struct kunit *test) +{ + struct id_reg_info id_data, *id_reg; + const struct tgran_test *t4, *t64, *t16; + struct kvm_vcpu *vcpu; + int field4, field4_2, field64, field64_2, field16, field16_2; + u64 v, v4, lim4, v64, lim64, v16, lim16; + int i, j, ret; + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64MMFR0_EL1); + + /* Save the original id_reg_info (and restore later) */ + memcpy(&id_data, id_reg, sizeof(id_data)); + + vcpu = test_kvm_vcpu_init(test); + + t4 = test->param_value; + field4_2 = t4->gran2_field; + field4 = tgran2_to_tgran1_shift(field4_2); + v4 = MAKE_MMFR0_TGRAN(field4, t4->gran1, field4_2, t4->gran2); + lim4 = MAKE_MMFR0_TGRAN(field4, t4->gran1_lim, field4_2, t4->gran2_lim); + + /* + * For each given gran4_2 params, test validate_id_aa64mmfr0_el1 + * with each of tgran64_2 and tgran16_2 params. + */ + for (i = 0; i < ARRAY_SIZE(tgran64_2_test_params); i++) { + t64 = &tgran64_2_test_params[i]; + field64_2 = t64->gran2_field; + field64 = tgran2_to_tgran1_shift(field64_2); + v64 = MAKE_MMFR0_TGRAN(field64, t64->gran1, + field64_2, t64->gran2); + lim64 = MAKE_MMFR0_TGRAN(field64, t64->gran1_lim, + field64_2, t64->gran2_lim); + + for (j = 0; j < ARRAY_SIZE(tgran16_2_test_params); j++) { + t16 = &tgran16_2_test_params[j]; + + field16_2 = t16->gran2_field; + field16 = tgran2_to_tgran1_shift(field16_2); + v16 = MAKE_MMFR0_TGRAN(field16, t16->gran1, + field16_2, t16->gran2); + lim16 = MAKE_MMFR0_TGRAN(field16, t16->gran1_lim, + field16_2, t16->gran2_lim); + + /* Build id_aa64mmfr0_el1 from tgran16/64/4 values */ + v = v16 | v64 | v4; + id_reg->vcpu_limit_val = lim16 | lim64 | lim4; + + ret = t4->ret ? t4->ret : t64->ret; + ret = ret ? 
ret : t16->ret; + KUNIT_EXPECT_EQ(test, + validate_id_aa64mmfr0_el1(vcpu, id_reg, v), + ret); + } + } + + /* Restore id_reg_info */ + memcpy(id_reg, &id_data, sizeof(id_data)); +} + +/* Tests for validate_id_aa64mmfr1_el1() */ +static void validate_id_aa64mmfr1_el1_test(struct kunit *test) +{ + struct id_reg_info id_data, *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64MMFR1_EL1); + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + /* Save the original id_reg_info (and restore later) */ + memcpy(&id_data, id_reg, sizeof(id_data)); + + /* Test for HADBS */ + id_reg->vcpu_limit_val = 0; /* HADBS = 0 */ + v = 0; /* HADBS = 0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64mmfr1_el1(vcpu, id_reg, v), 0); + + id_reg->vcpu_limit_val = 2; /* HADBS = 2 */ + v = 2; /* HADBS = 2 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64mmfr1_el1(vcpu, id_reg, v), 0); + + v = 1; /* HADBS = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64mmfr1_el1(vcpu, id_reg, v), 0); + + v = 0; /* HADBS = 0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64mmfr1_el1(vcpu, id_reg, v), 0); + + memcpy(id_reg, &id_data, sizeof(id_data)); +} + +/* Tests for validate_id_aa64dfr0_el1() */ +static void validate_id_aa64dfr0_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + id_reg = GET_ID_REG_INFO(SYS_ID_AA64DFR0_EL1); + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + v = 0; + KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + /* + * Tests for CTX_CMPS/BRPS. + * Number of context-aware breakpoints can be no more than number + * of supported breakpoints. + */ + v = 0x10001000; /* CTX_CMPS = 1, BRPS = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x20001000; /* CTX_CMPS = 2, BRPS = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + /* Tests for PMUVer */ + + /* Tests with PMUv3 disabled. 
*/ + + v = 0x000; /* PMUVER = 0x0 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + v = 0xf00; /* PMUVER = 0xf */ + KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x100; /* PMUVER = 1 */ + KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + /* Tests with PMUv3 enabled */ + set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features); + + v = 0x000; /* PMUVER = 0x0 */ + KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x000; /* PMUVER = 0xf */ + KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x100; /* PMUVER = 1 */ + KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0); +} + +/* Tests for validate_id_dfr0_el1() */ +static void validate_id_dfr0_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + id_reg = GET_ID_REG_INFO(SYS_ID_DFR0_EL1); + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + v = 0; + KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + /* Tests for PERFMON */ + + /* Tests with PMUv3 disabled */ + + v = 0x0000000; /* PERFMON = 0x0 */ + KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0xf000000; /* PERFMON = 0xf */ + KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x1000000; /* PERFMON = 1 */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x2000000; /* PERFMON = 2 */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x3000000; /* PERFMON = 3 */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + + /* Tests with PMUv3 enabled */ + set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features); + + v = 0x0000000; /* PERFMON = 0x0 */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0xf000000; /* PERFMON = 0xf */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x1000000; /* PERFMON = 1 */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x2000000; /* PERFMON = 2 */ + KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); + + v = 0x3000000; /* PERFMON = 3 */ + KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0); +} + +/* Tests for validate_mvfr1_el1(). */ +static void validate_mvfr1_el1_test(struct kunit *test) +{ + struct id_reg_info *id_reg; + struct kvm_vcpu *vcpu; + u64 v; + + id_reg = GET_ID_REG_INFO(SYS_MVFR1_EL1); + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + v = 0; + KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + /* + * Tests for FPHP/SIMDHP. 
+ * Arm ARM says the level of support indicated by FPHP must be + * equivalent to the level of support indicated by the SIMDHP, + * meaning the permitted values are: + * FPHP = 0x0, SIMDHP = 0x0 + * FPHP = 0x2, SIMDHP = 0x1 + * FPHP = 0x3, SIMDHP = 0x2 + */ + v = 0x0000000; /* FPHP = 0, SIMDHP = 0 */ + KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + v = 0x2100000; /* FPHP = 2, SIMDHP = 1 */ + KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + v = 0x3200000; /* FPHP = 3, SIMDHP = 2 */ + KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + v = 0x1100000; /* FPHP = 1, SIMDHP = 1 */ + KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + v = 0x2200000; /* FPHP = 2, SIMDHP = 2 */ + KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + v = 0x3300000; /* FPHP = 3, SIMDHP = 3 */ + KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); + + v = (u64)-1; + KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0); +} + +/* + * Helper function for validate_id_reg_test(). + * We don't use KUNIT_ASSERT or kunit_skip because this is a helper test + * function and we are not sure if it's safe to exist from the test case. + */ +static void validate_id_reg_test_one_field(struct kunit *test, + u32 id, int pos, int fval, int flimit, + bool is_signed, struct id_reg_info *idr) +{ + struct kvm_vcpu *vcpu; + int fmin = is_signed ? -1 : 0; + int fmax = is_signed ? 7 : 15; + u64 fmask = ARM64_FEATURE_FIELD_MASK; + u64 val; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + if (flimit > fmax) { + /* Shouldn't happen. Make the test failure. */ + KUNIT_EXPECT_FALSE(test, flimit > fmax); + kunit_err(test, "%s: flimit(%d) > fmax(%d). Must be test bug", + __func__, flimit, fmax); + return; + } + + if (fval > fmin) { + /* Set the field to a smaller value */ + val = ((u64)(fval - 1) & fmask) << pos; + KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, id, val), 0); + } + + if (fval < flimit) { + /* Set the field to a larger value, but smaller than flimit */ + val = ((u64)(fval + 1) & fmask) << pos; + KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, id, val), 0); + /* Set the field to the flimit */ + val = ((u64)flimit & fmask) << pos; + KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, id, val), 0); + } + + if (flimit < fmax) { + /* Set the field to a larger value than flimit */ + val = ((u64)(flimit + 1) & fmask) << pos; + KUNIT_EXPECT_NE(test, validate_id_reg(vcpu, id, val), 0); + + /* Test with ignore_mask */ + if (idr) { + idr->ignore_mask = fmask << pos; + KUNIT_EXPECT_EQ(test, + validate_id_reg(vcpu, id, val), + 0); + } + } + test_kvm_vcpu_fini(test, vcpu); +} + +/* + * Test for validate_id_reg(). 
+ */ +static void validate_id_reg_test(struct kunit *test) +{ + struct id_reg_info idr_data, *idr, *original_idr; + u32 id; + int fval, flim, pos; + u64 val; + bool sign; + + /* Use AA64PFR0_EL1 because it includes both sign/unsigned fields */ + id = SYS_ID_AA64PFR0_EL1; + + /* Save the original id_reg_info */ + original_idr = GET_ID_REG_INFO(id); + + /* Test with a temporary id_reg_info for testing */ + idr = &idr_data; + GET_ID_REG_INFO(id) = idr; + + fval = 0x1; + flim = 0x2; + + /* Test with unsigned field */ + pos = ID_AA64PFR0_RAS_SHIFT; + + /* Set up id_reg_info for testing */ + memset(idr, 0, sizeof(*idr)); + idr->sys_reg = id; + idr->vcpu_limit_val = (u64)flim << pos; + validate_id_reg_test_one_field(test, id, pos, fval, flim, false, idr); + + /* Test with signed field */ + pos = ID_AA64PFR0_FP_SHIFT; + + /* Set up id_reg_info for testing */ + memset(idr, 0, sizeof(*idr)); + idr->sys_reg = id; + idr->vcpu_limit_val = (u64)flim << pos; + validate_id_reg_test_one_field(test, id, pos, fval, flim, true, idr); + + + /* Test without id_reg_info */ + GET_ID_REG_INFO(id) = NULL; + if (original_idr) + val = original_idr->vcpu_limit_val; + else + val = read_sanitised_ftr_reg(id); + + for (pos = 0; pos < 64; pos += 4) { + if (pos == ID_AA64PFR0_FP_SHIFT || + pos == ID_AA64PFR0_ASIMD_SHIFT) + sign = true; + else + sign = false; + + fval = cpuid_feature_extract_field(val, pos, sign); + validate_id_reg_test_one_field(test, id, pos, fval, fval, + sign, NULL); + } + + /* Restore the original id_reg_info */ + GET_ID_REG_INFO(id) = original_idr; +} + +static struct kunit_case kvm_sys_regs_test_cases[] = { + KUNIT_CASE_PARAM(vcpu_id_reg_feature_frac_check_test, frac_gen_params), + KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran4_2_gen_params), + KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran64_2_gen_params), + KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran16_2_gen_params), + KUNIT_CASE(validate_id_aa64pfr0_el1_test), + KUNIT_CASE(validate_id_aa64pfr1_el1_test), + KUNIT_CASE(validate_id_aa64isar0_el1_test), + KUNIT_CASE(validate_id_aa64isar1_el1_test), + KUNIT_CASE_PARAM(validate_id_aa64mmfr0_el1_test, tgran4_2_gen_params), + KUNIT_CASE(validate_id_aa64mmfr1_el1_test), + KUNIT_CASE(validate_id_aa64dfr0_el1_test), + KUNIT_CASE(validate_id_dfr0_el1_test), + KUNIT_CASE(validate_mvfr1_el1_test), + KUNIT_CASE(validate_id_reg_test), + {} +}; + +static struct kunit_suite kvm_sys_regs_test_suite = { + .name = "kvm-sys-regs-test-suite", + .test_cases = kvm_sys_regs_test_cases, +}; + +kunit_test_suites(&kvm_sys_regs_test_suite); +MODULE_LICENSE("GPL"); From patchwork Mon Feb 14 06:57:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745056 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C0D23C433F5 for ; Mon, 14 Feb 2022 07:10:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: 
Date: Sun, 13 Feb 2022 22:57:37 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-19-reijiw@google.com>
References: <20220214065746.1230608-1-reijiw@google.com>
Subject: [PATCH v5 18/27] KVM: arm64: Use vcpu->arch cptr_el2 to track value of cptr_el2 for VHE
From: Reiji Watanabe
To: Marc Zyngier , kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe
X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Track the baseline guest value for cptr_el2 in struct kvm_vcpu_arch for VHE. Use this value when setting cptr_el2 for the guest. Currently this value is unchanged, but the following patches will set trapping bits based on features supported for the guest. No functional change intended. Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_arm.h | 16 ++++++++++++++++ arch/arm64/kvm/arm.c | 5 ++++- arch/arm64/kvm/hyp/vhe/switch.c | 14 ++------------ 3 files changed, 22 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h index 01d47c5886dc..8ab6ea038721 100644 --- a/arch/arm64/include/asm/kvm_arm.h +++ b/arch/arm64/include/asm/kvm_arm.h @@ -288,6 +288,22 @@ GENMASK(19, 14) | \ BIT(11)) +/* + * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to + * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2, + * except for some missing controls, such as TAM. + * In this case, CPTR_EL2.TAM has the same position with or without + * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM + * shift value for trapping the AMU accesses. + */ +#define CPTR_EL2_VHE_GUEST_DEFAULT (CPACR_EL1_TTA | CPTR_EL2_TAM) + +/* + * Bits that are copied from vcpu->arch.cptr_el2 to set cptr_el2 for + * guest with VHE. + */ +#define CPTR_EL2_VHE_GUEST_TRACKED_MASK (CPACR_EL1_TTA | CPTR_EL2_TAM) + /* Hyp Debug Configuration Register bits */ #define MDCR_EL2_E2TB_MASK (UL(0x3)) #define MDCR_EL2_E2TB_SHIFT (UL(24)) diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 68ffced5b09e..7bb744bb23ce 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1182,7 +1182,10 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, } vcpu_reset_hcr(vcpu); - vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT; + if (has_vhe()) + vcpu->arch.cptr_el2 = CPTR_EL2_VHE_GUEST_DEFAULT; + else + vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT; /* * Handle the "start in power-off" case. diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 11d053fdd604..ed01c4ee9953 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -37,20 +37,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu) ___activate_traps(vcpu); val = read_sysreg(cpacr_el1); - val |= CPACR_EL1_TTA; + val &= ~CPTR_EL2_VHE_GUEST_TRACKED_MASK; + val |= (vcpu->arch.cptr_el2 & CPTR_EL2_VHE_GUEST_TRACKED_MASK); val &= ~CPACR_EL1_ZEN; - /* - * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to - * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2, - * except for some missing controls, such as TAM. - * In this case, CPTR_EL2.TAM has the same position with or without - * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM - * shift value for trapping the AMU accesses. 
- */ - - val |= CPTR_EL2_TAM; - if (update_fp_enabled(vcpu)) { if (vcpu_has_sve(vcpu)) val |= CPACR_EL1_ZEN; From patchwork Mon Feb 14 06:57:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745058 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1437EC433F5 for ; Mon, 14 Feb 2022 07:13:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=wLfrwjNlQmNTfluHLmBlrsoTRVlYMENlqhDm1ptf4gg=; b=xT+PyCZbOXI46DeQRMVvIIz+Hv mdq2zCBcq0F4q5kejf6q2H2gIXRv68InuAXfKwsQR2WZiiARC3JL0qjNxk4cNqoC5PwkV02+S7t2H xg0mz7oPvt4VG0Lr2rz6BshL6yfEuLFLdFKEEIudV/IYnUXeLfTNG8ip27CbGbovCTuo4zuZxpBWT 3TJEZtSTV0q3zeyX/QtiAfxDESJssi7B52i5kAMgRMCDf05AkbRkCqh7uR4cGt6me1vTtZXHTBv7p upQsfUeNx70RMkv49Hvye+oDt6Su2l3G9oiT/MsF3ORi64fVYNqV3eGL9XCEkpWmvlarPr42e1aJB XpLy0XxQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVWK-00DbYA-EA; Mon, 14 Feb 2022 07:11:37 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVME-00DXNg-81 for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:01:14 +0000 Received: by mail-pj1-x104a.google.com with SMTP id s10-20020a17090a948a00b001b96be201f6so6206654pjo.4 for ; Sun, 13 Feb 2022 23:01:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=WYuhkQ1f9dyamGY/Zm/EszPZUHJciigBEFscTFR6Bbw=; b=bbqD228uP706InbH2h7WhJkjP0aAamFk6IHT3n76f+VHZ+W/H4+ENS0JrfgQZL1YAK rcjRdAC6qf4WWs0/bAGyzacIBmradolt+LB0UbbUEYPe6dxQ3oLRJ4TFE4IzY5zWOpfV RenjpPlONHGgGah8xZ+qo1qLNBxBoQPxlUQ1S9lnvRlZLjS4hbgYe9fmKV+P4Pe8DbNA w8XjztpgpFlmkHuTQedWE56kbeLYzQsEJ06G8MgCL3M/NH2CktBkOxGrqGyt4aqjTvUZ FrDWnVzmwBCZkIsgjQvNFPIOwucfK1af2me7YOWIsfr6/5WxaFAafdBnW9RXxQFQ4KN6 phqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=WYuhkQ1f9dyamGY/Zm/EszPZUHJciigBEFscTFR6Bbw=; b=SOg0nAl94WjAFpoJZByd6ve3f6tlScvmFrPHKdyXie4bqqV98juU4N7Yu8iij1HFwQ 7AHu9Bo+PHvk7YJUMv41EeJB3LUa6/m6KP6qMDAyxIY8ci7AAOW38yNnIPy+k51HxfG5 ZHj6LeL0T5rLlnrebChVMescWaatKxA6LcVMmkfUhNJkUNdX5DCqPHeOPACYN8E6AoMn aunhlOKh6McK7od+IzUFNgRd/7d21He5nEYX1UUI4xf1Ml2nsOBu0PUIJ3NhaE/QGuLQ EAoOXu9wV1DN4N/xKuth51s+TbBmT4nKV1s3+SuATiIIleGnU96fB2HajT0eBMztNOIi xogA== X-Gm-Message-State: AOAM532S30BRiwg3vlqL4xCYhZ0UL6rl3ALM67Q2cUmkZBAQ9ylWpOc+ 5s/xc0RlIoMtg0nXU4HGz8OroDwG+3I= X-Google-Smtp-Source: ABdhPJzR9/E9zp9+mkiJY/7Sg226qKOMdx/CNBGfWEk2e+hbHLK+Z+McJBJzuxOgwkWIepeZPXePrEjL9XA= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) 
(user=reijiw job=sendgmr) by 2002:a17:90a:13c5:: with SMTP id s5mr13311018pjf.181.1644822069118; Sun, 13 Feb 2022 23:01:09 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:38 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-20-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 19/27] KVM: arm64: Use vcpu->arch.mdcr_el2 to track value of mdcr_el2 From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230110_318733_588C9C81 X-CRM114-Status: GOOD ( 14.93 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Track the baseline guest value for mdcr_el2 in struct kvm_vcpu_arch. Use this value when setting mdcr_el2 for the guest. Currently this value is unchanged, but the following patches will set trapping bits based on features supported for the guest. No functional change intended. Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_arm.h | 16 ++++++++++++++++ arch/arm64/kvm/arm.c | 1 + arch/arm64/kvm/debug.c | 13 ++++--------- 3 files changed, 21 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h index 8ab6ea038721..4b2ac9e32a36 100644 --- a/arch/arm64/include/asm/kvm_arm.h +++ b/arch/arm64/include/asm/kvm_arm.h @@ -333,6 +333,22 @@ BIT(18) | \ GENMASK(16, 15)) +/* + * The default value for the guest below also clears MDCR_EL2_E2PB_MASK + * and MDCR_EL2_E2TB_MASK to disable guest access to the profiling and + * trace buffers. + */ +#define MDCR_GUEST_FLAGS_DEFAULT \ + (MDCR_EL2_TPM | MDCR_EL2_TPMS | MDCR_EL2_TTRF | \ + MDCR_EL2_TPMCR | MDCR_EL2_TDRA | MDCR_EL2_TDOSA) + +/* Bits that are copied from vcpu->arch.mdcr_el2 to set mdcr_el2 for guest. 
*/ +#define MDCR_GUEST_FLAGS_TRACKED_MASK \ + (MDCR_EL2_TPM | MDCR_EL2_TPMS | MDCR_EL2_TTRF | \ + MDCR_EL2_TPMCR | MDCR_EL2_TDRA | MDCR_EL2_TDOSA | \ + (MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT)) + + /* For compatibility with fault code shared with 32-bit */ #define FSC_FAULT ESR_ELx_FSC_FAULT #define FSC_ACCESS ESR_ELx_FSC_ACCESS diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 7bb744bb23ce..ce7229010a78 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -1182,6 +1182,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, } vcpu_reset_hcr(vcpu); + vcpu->arch.mdcr_el2 = MDCR_GUEST_FLAGS_DEFAULT; if (has_vhe()) vcpu->arch.cptr_el2 = CPTR_EL2_VHE_GUEST_DEFAULT; else diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index db9361338b2a..83330968a411 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -84,16 +84,11 @@ void kvm_arm_init_debug(void) static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu) { /* - * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK - * to disable guest access to the profiling and trace buffers + * Keep the vcpu->arch.mdcr_el2 bits that are specified by + * MDCR_GUEST_FLAGS_TRACKED_MASK. */ - vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; - vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM | - MDCR_EL2_TPMS | - MDCR_EL2_TTRF | - MDCR_EL2_TPMCR | - MDCR_EL2_TDRA | - MDCR_EL2_TDOSA); + vcpu->arch.mdcr_el2 &= MDCR_GUEST_FLAGS_TRACKED_MASK; + vcpu->arch.mdcr_el2 |= __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK; /* Is the VM being debugged by userspace? */ if (vcpu->guest_debug) From patchwork Mon Feb 14 06:57:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745061 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D5AAEC433EF for ; Mon, 14 Feb 2022 07:14:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=R7RRbNMqPdmVCqcYNZjXxD7ECHX/K68MAUwf1Wblm4k=; b=SaK1fVcGwDRUeXjJykPey192en fmruM81rQ1usGOdAgmX2jhUUfSZX1Xion1fZCJT90fRfpgbVfEwFFQ8jYuC7GoIsIozSwg7aRpjoA 8rjIOUaoXNPXrNtTy2HFeRrT9plgd2UQIcn9pVZGA8wr6qf401uTbSA03ucTDvi0SWlC6egIiak6O xBlNCSaJ5axG87GO+pT7cVV1BM5Al1CpPT9ypEDK29iMWTwLWxbq7MOER5JxylXUSi5Zu2S4NNFzw x1OPJ7ladmZ7YomNlPgPDPk37oFuvyNllvt5HjWxYuAdWfLe1VKTUcQ7NxLn2MOPix9Xu4d2GRgTV PhaE18FQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVXN-00Dby7-Nx; Mon, 14 Feb 2022 07:12:43 +0000 Received: from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVMK-00DXR4-29 for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:01:18 +0000 Received: by mail-pj1-x1049.google.com with SMTP id 
y10-20020a17090a134a00b001b8b7e5983bso10294654pjf.6 for ; Sun, 13 Feb 2022 23:01:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ODKlCnJ+oDdhrPLjcew2EifH1Z7m1Qcdn/bAY4qUTjc=; b=YTh3VjXMey35X0Uzn/geCrZuF+AAYEhvErfshDJ6V6ZZx2JWhs8SBnUgu0VuE5EBpm diOU2pGfvkcvQY7H1WIwvdF6i+EVb03j9blCKF5AiBms8tDv0rCCTZNe9h/7Vm7jHB+E bZCsrTlbiw+MZQTAQ5B/ijT6XWUEoZ5JsTeNfd/oSuy8Y8wtB/gBJS3qNaTKitRKVW8l KAWDsA4zHE3P2PEPsE/BHLYkMisZmlvKn2X4KRrUuAgPLC6kAKi71OdXP5LK6D+2YGyd zTBd3dpNxjvrWksBCL1t+JBIrbebzaeZ6GpfJFSb/9/u1j0dpceP+A8rglzux2PdJ0Uj Ravw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ODKlCnJ+oDdhrPLjcew2EifH1Z7m1Qcdn/bAY4qUTjc=; b=2MlNkvfeByrSvvgYqDBEtsVwfKuWaQQASShx2uaWEnRFw6BYWMyz0if9nOYqqn26ke loDqprU/1L8BtrXfpMCR99yJZa2C5ULI1MUqwwLbiDL3ZWLhVX/swabjHiaBMmIfp9dD +HNMxpAIL+r5gFGqaXAryFK3CCDMRRIOiJjIBfu9gfHa4njx+KhW1O6pOr6NQmLmnLUy Tr6dzg8BSKkf4EPikttpVzhndLTbVpqzLrcgS69HpojoFvdwie4xyZlidCR8uW673rbB vaKycGuZLy79h79u/L1SWJxG6Z2QtJ1e86eKW6XWVquPf9XS+7cWrgo9hrepxY261P0V P5nw== X-Gm-Message-State: AOAM5324yIpRPsCuGZdi/mWmoQPb6V83JiOMGJ7hjSaYpUECUSorbGC5 IjWaW0YoEj39TokEfa0ceZ6Op2Ht8yw= X-Google-Smtp-Source: ABdhPJxLgZLNaiReuS2zrqDion211dMUzQCuxAZBqyjs12+KsRwfxViCryGR38A+IT8Nmm1ibEStEjcih80= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:90b:1c81:: with SMTP id oo1mr12985775pjb.192.1644822075045; Sun, 13 Feb 2022 23:01:15 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:39 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-21-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 20/27] KVM: arm64: Introduce framework to trap disabled features From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230116_167145_AAFC8011 X-CRM114-Status: GOOD ( 30.39 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org When a CPU feature that is supported on the host is not exposed to its guest, emulating a real CPU's behavior (by trapping or disabling guest's using the feature) is generally a desirable behavior (when it's possible without any or little side effect). Introduce feature_config_ctrl structure, which manages feature information to program configuration register to trap or disable the feature when the feature is not exposed to the guest, and functions that uses the structure to activate the vcpu's trapping the feature. Those codes don't update trap configuration registers themselves (HCR_EL2, etc) but values for the registers in kvm_vcpu_arch at the first KVM_RUN. 
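Per feature_config_ctrl entry, the decision made at the first KVM_RUN boils down to the sketch below; this is only a simplified restatement of id_reg_features_trap_activate() introduced in the sys_regs.c hunk of this patch, not extra code.

/* For each ctrl in (*id_reg->trap_features), at the first KVM_RUN: */
u64 guest_val = __read_id_reg(vcpu, id_reg->sys_reg);

if (ctrl->ftr_need_trap && ctrl->ftr_need_trap(vcpu))
	ctrl->trap_activate(vcpu);	/* the hook itself requests the trap */
else if (feature_avail(ctrl, id_reg->sys_val) &&	/* supported on the host ... */
	 !feature_avail(ctrl, guest_val))		/* ... but hidden from the guest */
	ctrl->trap_activate(vcpu);
/* Only the vcpu->arch.{hcr,mdcr,cptr}_el2 copies are updated here; the
 * hardware registers are programmed later, on guest entry. */
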
At present, no feature has feature_config_ctrl yet and the following patches will add the feature_config_ctrl for some features. Signed-off-by: Reiji Watanabe --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/arm.c | 13 ++-- arch/arm64/kvm/sys_regs.c | 112 ++++++++++++++++++++++++++++-- 3 files changed, 117 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5e53102a1ac1..9b7fad07fcb0 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -749,6 +749,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, void set_default_id_regs(struct kvm *kvm); int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval); int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu); +void kvm_vcpu_init_traps(struct kvm_vcpu *vcpu); /* Guest/host FPSIMD coordination helpers */ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index ce7229010a78..dfd247d2746f 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -631,13 +631,16 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) static_branch_inc(&userspace_irqchip_in_use); } - /* - * Initialize traps for protected VMs. - * NOTE: Move to run in EL2 directly, rather than via a hypercall, once - * the code is in place for first run initialization at EL2. - */ + /* Initialize traps for the guest. */ if (kvm_vm_is_protected(kvm)) + /* + * NOTE: Move to run in EL2 directly, rather than via a + * hypercall, once the code is in place for first run + * initialization at EL2. + */ kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu); + else + kvm_vcpu_init_traps(vcpu); mutex_lock(&kvm->lock); kvm->arch.ran_once = true; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 77a106d255be..faa28e7926b2 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -283,10 +283,34 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu, (cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \ ID_AA64ISAR1_GPI_IMP_DEF) +/* + * Feature information to program configuration register to trap or disable + * guest's using a feature when the feature is not exposed to the guest. + */ +struct feature_config_ctrl { + /* ID register/field for the feature */ + u32 ftr_reg; /* ID register */ + bool ftr_signed; /* Is the feature field signed ? */ + u8 ftr_shift; /* Field of ID register for the feature */ + s8 ftr_min; /* Min value that indicate the feature */ + + /* + * Function to check trapping is needed. This is used when the above + * fields are not enough to determine if trapping is needed. + */ + bool (*ftr_need_trap)(struct kvm_vcpu *vcpu); + + /* Function to activate trapping the feature. */ + void (*trap_activate)(struct kvm_vcpu *vcpu); +}; + struct id_reg_info { /* Register ID */ u32 sys_reg; + /* Sanitized system value */ + u64 sys_val; + /* * Limit value of the register for a vcpu. 
The value is the sanitized * system value with bits set/cleared for unsupported features for the @@ -328,13 +352,15 @@ struct id_reg_info { */ u64 (*vcpu_mask)(const struct kvm_vcpu *vcpu, const struct id_reg_info *id_reg); + + /* Information to trap features that are disabled for the guest */ + const struct feature_config_ctrl *(*trap_features)[]; }; static void id_reg_info_init(struct id_reg_info *id_reg) { - u64 val = read_sanitised_ftr_reg(id_reg->sys_reg); - - id_reg->vcpu_limit_val = val; + id_reg->sys_val = read_sanitised_ftr_reg(id_reg->sys_reg); + id_reg->vcpu_limit_val = id_reg->sys_val; if (id_reg->init) id_reg->init(id_reg); @@ -345,7 +371,8 @@ static void id_reg_info_init(struct id_reg_info *id_reg) * on the host. */ WARN_ON_ONCE(arm64_check_features_kvm(id_reg->sys_reg, - id_reg->vcpu_limit_val, val)); + id_reg->vcpu_limit_val, + id_reg->sys_val)); } static int validate_id_aa64pfr0_el1(struct kvm_vcpu *vcpu, @@ -900,6 +927,24 @@ static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val) return err; } +static inline bool feature_avail(const struct feature_config_ctrl *ctrl, + u64 id_val) +{ + int field_val = cpuid_feature_extract_field(id_val, + ctrl->ftr_shift, ctrl->ftr_signed); + + return (field_val >= ctrl->ftr_min); +} + +static inline bool vcpu_feature_is_available(struct kvm_vcpu *vcpu, + const struct feature_config_ctrl *ctrl) +{ + u64 val; + + val = __read_id_reg(vcpu, ctrl->ftr_reg); + return feature_avail(ctrl, val); +} + /* * ARMv8.1 mandates at least a trivial LORegion implementation, where all the * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0 @@ -1849,6 +1894,46 @@ static int reg_from_user(u64 *val, const void __user *uaddr, u64 id); static int reg_to_user(void __user *uaddr, const u64 *val, u64 id); static u64 sys_reg_to_index(const struct sys_reg_desc *reg); +static void id_reg_features_trap_activate(struct kvm_vcpu *vcpu, + const struct id_reg_info *id_reg) +{ + u64 val; + int i = 0; + const struct feature_config_ctrl **ctrlp_array, *ctrl; + + if (!id_reg || !id_reg->trap_features) + /* No information to trap a feature */ + return; + + val = __read_id_reg(vcpu, id_reg->sys_reg); + if (val == id_reg->sys_val) + /* No feature needs to be trapped (no feature is disabled). */ + return; + + ctrlp_array = *id_reg->trap_features; + while ((ctrl = ctrlp_array[i++]) != NULL) { + if (WARN_ON_ONCE(!ctrl->trap_activate)) + /* Shouldn't happen */ + continue; + + if (ctrl->ftr_need_trap && ctrl->ftr_need_trap(vcpu)) { + ctrl->trap_activate(vcpu); + continue; + } + + if (!feature_avail(ctrl, id_reg->sys_val)) + /* The feature is not supported on the host. */ + continue; + + if (feature_avail(ctrl, val)) + /* The feature is enabled for the guest. */ + continue; + + /* The feature is supported but disabled. */ + ctrl->trap_activate(vcpu); + } +} + /* Visibility overrides for SVE-specific control registers */ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) @@ -3481,6 +3566,25 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) return write_demux_regids(uindices); } +/* + * This function activates vcpu's trapping of features that are included in + * trap_features[] of id_reg_info if the features are supported on the + * host, but are hidden from the guest (i.e. values of ID registers for + * the guest are modified to not show the features' availability). + * This function just updates values for trap configuration registers (e.g. 
+ * HCR_EL2, etc) in kvm_vcpu_arch, which will be restored before switching + * to the guest, but doesn't update the registers themselves. + * This function should be called once at the first KVM_RUN (ID registers + * are immutable after the first KVM_RUN). + */ +void kvm_vcpu_init_traps(struct kvm_vcpu *vcpu) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(id_reg_info_table); i++) + id_reg_features_trap_activate(vcpu, id_reg_info_table[i]); +} + /* ID register's fractional field information with its feature field. */ struct feature_frac { u32 id; From patchwork Mon Feb 14 06:57:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D60C8C433F5 for ; Mon, 14 Feb 2022 07:15:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=lb81kZRQ7EQaO3EG6sWcj1xSuHTZnS7/Lf9/UzaSlRs=; b=25MziwzpS5xEZF7Lg/oUapFXCP Hmk44hkfVNFWD32p+f2ab5ZoYOnLg/zihzgIhH0+ZzER6GZPPYAapCgZafNiwxqkvS07XhdpHpp+Y Ybpi9/d0LnyCgcyyxWLtxPh59iO6ZjmOck8pFYZsCXA4PNherBPfcQeIqFoa5kwXUZu/EtQzIsVi8 fl6P1id/axZ+PRW+iiL2q0erwng5adZ3dwXnqJiyhmpJpYMC6V/DLYIi14skMWcRo8QdsQS55QnRj 2iaLl9tALHHCSf80hjdcBDm+ggRTrG903CmhRNQBmTI8bOuA0YKdBCgz3JmTSpEqmiBPoCuNVrCNm Ubq/5d3A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVYX-00DcUG-RM; Mon, 14 Feb 2022 07:13:54 +0000 Received: from mail-pj1-x104a.google.com ([2607:f8b0:4864:20::104a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVMQ-00DXTe-LK for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:01:24 +0000 Received: by mail-pj1-x104a.google.com with SMTP id g14-20020a17090a7d0e00b001b8ef9f9545so13326547pjl.0 for ; Sun, 13 Feb 2022 23:01:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=wbtKndzAWvCkk7WyRwv4b5BaCedyCpezxbPT2xu5RJI=; b=p6ujj/lKiFqYkdKFMxlHGT+Frbk0tdmmd9TniaJj8/tpMqYTDNEGFxATe8rnWIymL/ ObEcGY34riW2pN4p5uuyMYUwl77InvEVNgVZ8rWwF5F+vdqBJ1eHv9feQIvrJ7pjHZjG DzKl1FMeYMXz3NHD8ReCUOx1/Cvdf1acbtkqsxVZUPMiUEEHxlK/AC8q3XONnkRwT0OV 1xTHodCTqhi0ySWqpoO7EuFnqYEhzVgeQEewffTmFxNSfBIQXoP6J4a+vXCjxaCKs7UM Gg6QuYGNH4SfYvRZKxkAnCY24l1xno9QweTQnQigeQ63LR96xoCuAkm4e0RwN2/ybsgF E0Qw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=wbtKndzAWvCkk7WyRwv4b5BaCedyCpezxbPT2xu5RJI=; b=kREhHaxuLnUMNpn5JCNHe9u8mafr8/svMoqjDbIP761MKPgJNHROC7CMPitIICgK2R mBI+fo4QZ0MWtfVvEu5GkkmDcEd8VguFKsVscsBpgQwhMwZwV+0kMiwPI6+bjwQaQ3jY mXSeEuDFeAcWqlxK0LAsQwKsDpltr5tH+B6RpxDaDBRddsqprCnud4NqLvGumSiBP6Hs 
CJthwxVDc1hWV6dRK8SamLh2ksWS5YLjZwRr9ndqRw/6WcxQzAKywS+ddvtresMIdgpC 20BWB0PkbfdqfuZ0QShxMoc4SRCaX6gzab3L5Tcc5X2jYe1fkY0iNr8+KjQ8rheVca0g jaDQ== X-Gm-Message-State: AOAM531R6uagvvlpvA8FUNbsRNMskrWPoYXRw8Yh4NXjE4AkUZCI3c5B Lq06IAuaYAIiEKvPuGj1eCLsBjIbBxE= X-Google-Smtp-Source: ABdhPJwYu3eHHu7vONkLYq7yvs6BlGo8YwufFo6hGxBHerJCuXqQM+wJbzVOX7Pn89wbvQVd+ujOZB5s4qM= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:90a:e40f:: with SMTP id hv15mr1640233pjb.1.1644822081267; Sun, 13 Feb 2022 23:01:21 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:40 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-22-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 21/27] KVM: arm64: Trap disabled features of ID_AA64PFR0_EL1 From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230122_772219_82AE996B X-CRM114-Status: GOOD ( 16.81 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add feature_config_ctrl for RAS and AMU, which are indicated in ID_AA64PFR0_EL1, to program configuration registers to trap guest's using those features when they are not exposed to the guest. Introduce trap_ras_regs() to change a behavior of guest's access to the registers, which is currently raz/wi, depending on the feature's availability for the guest (and inject undefined instruction exception when guest's RAS register access are trapped and RAS is not exposed to the guest). In order to keep the current visibility of the RAS registers from userspace (always visible), a visibility function for RAS registers is not added. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 90 +++++++++++++++++++++++++++++++++++---- 1 file changed, 82 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index faa28e7926b2..72b7cfaef41e 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -304,6 +304,63 @@ struct feature_config_ctrl { void (*trap_activate)(struct kvm_vcpu *vcpu); }; +enum vcpu_config_reg { + VCPU_HCR_EL2 = 1, + VCPU_MDCR_EL2, + VCPU_CPTR_EL2, +}; + +static void feature_trap_activate(struct kvm_vcpu *vcpu, + enum vcpu_config_reg cfg_reg, + u64 cfg_set, u64 cfg_clear) +{ + u64 *reg_ptr, reg_val; + + switch (cfg_reg) { + case VCPU_HCR_EL2: + reg_ptr = &vcpu->arch.hcr_el2; + break; + case VCPU_MDCR_EL2: + reg_ptr = &vcpu->arch.mdcr_el2; + break; + case VCPU_CPTR_EL2: + reg_ptr = &vcpu->arch.cptr_el2; + break; + } + + /* Clear/Set fields that are indicated by cfg_clear/cfg_set. 
*/ + reg_val = (*reg_ptr & ~cfg_clear); + reg_val |= cfg_set; + *reg_ptr = reg_val; +} + +static void feature_ras_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TERR | HCR_TEA, HCR_FIEN); +} + +static void feature_amu_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPTR_EL2_TAM, 0); +} + +/* For ID_AA64PFR0_EL1 */ +static struct feature_config_ctrl ftr_ctrl_ras = { + .ftr_reg = SYS_ID_AA64PFR0_EL1, + .ftr_shift = ID_AA64PFR0_RAS_SHIFT, + .ftr_min = ID_AA64PFR0_RAS_V1, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_ras_trap_activate, +}; + +static struct feature_config_ctrl ftr_ctrl_amu = { + .ftr_reg = SYS_ID_AA64PFR0_EL1, + .ftr_shift = ID_AA64PFR0_AMU_SHIFT, + .ftr_min = ID_AA64PFR0_AMU, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_amu_trap_activate, +}; + struct id_reg_info { /* Register ID */ u32 sys_reg; @@ -816,6 +873,11 @@ static struct id_reg_info id_aa64pfr0_el1_info = { .init = init_id_aa64pfr0_el1_info, .validate = validate_id_aa64pfr0_el1, .vcpu_mask = vcpu_mask_id_aa64pfr0_el1, + .trap_features = &(const struct feature_config_ctrl *[]) { + &ftr_ctrl_ras, + &ftr_ctrl_amu, + NULL, + }, }; static struct id_reg_info id_aa64pfr1_el1_info = { @@ -945,6 +1007,18 @@ static inline bool vcpu_feature_is_available(struct kvm_vcpu *vcpu, return feature_avail(ctrl, val); } +static bool trap_ras_regs(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (!vcpu_feature_is_available(vcpu, &ftr_ctrl_ras)) { + kvm_inject_undefined(vcpu); + return false; + } + + return trap_raz_wi(vcpu, p, r); +} + /* * ARMv8.1 mandates at least a trivial LORegion implementation, where all the * RW registers are RES0 (which we can implement as RAZ/WI). 
On an ARMv8.0 @@ -2316,14 +2390,14 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 }, { SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 }, - { SYS_DESC(SYS_ERRIDR_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERRSELR_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERXFR_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERXCTLR_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERXSTATUS_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERXADDR_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERXMISC0_EL1), trap_raz_wi }, - { SYS_DESC(SYS_ERXMISC1_EL1), trap_raz_wi }, + { SYS_DESC(SYS_ERRIDR_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERRSELR_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERXFR_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERXCTLR_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERXSTATUS_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERXADDR_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERXMISC0_EL1), trap_ras_regs }, + { SYS_DESC(SYS_ERXMISC1_EL1), trap_ras_regs }, MTE_REG(TFSR_EL1), MTE_REG(TFSRE0_EL1), From patchwork Mon Feb 14 06:57:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745063 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C4363C433EF for ; Mon, 14 Feb 2022 07:16:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=t1Qcf0GaBUamgMEZJBQmVCJQC7gnqLtn8SFweVTpeqY=; b=fJ4WzHlq693XalkFi+mCT0Ac8D xSm1AAmd2qoD3ovkOmjct0/UwNIbK5JrbUDPDvnAspZmHkouX+2pnqGgMknG1Gq2HUUSzIRz+1nWo DDRuSJhEOflbMHwqqVPeDq0VNA6lQhVy3zugyeImGKhDH2dQ2jN4B62C3tkPb7b/6lPPSfoZRZ+qs o+6qoEvhCeE8zWfAWp/+99aiwxgHkH7uGrBPu9w4LS0w8QSQtwayLYw7roHGtAYxkVEq6Cv6Wy7ZB PxsBjTtAv3VpgizwBB5rD59cYM/5Oxcu2NoY31bjAkbrTmLVWvImOLrRxwjFoxDjsJnBWl2Vcf4yt 73h0GurQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVZY-00Dcvc-Km; Mon, 14 Feb 2022 07:14:58 +0000 Received: from mail-pl1-x64a.google.com ([2607:f8b0:4864:20::64a]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVMV-00DXW3-Nc for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:01:32 +0000 Received: by mail-pl1-x64a.google.com with SMTP id j1-20020a170903028100b0014b1f9e0068so5777655plr.8 for ; Sun, 13 Feb 2022 23:01:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=1nsgNzH+YpfR3/KlbrupFdkkNs/4tvboqaVeY8OSkcA=; b=Uzrf4k8tdsN8iZk1o4Z1sprsLA1ZDIULzNZcfKvd8PPrrmESS5rysT77OyuSEML4DF U+5KaNW1vZk4LEbccSoj8bsf7DakaW2MOusPTwkA6AnYJdltLtH9e+Ur/jj6FnKaSXKh w8Ygcb7Ge32vFXcMJxmtWjb0wlLab22kQyymXpYmZOE/GBiqqvZeBwz9HPTFVkQSXKZ9 2BpaE2lLws1UG9FMApBj1HgD73tjL9mrvvH8rtVIVeW4+InQUIx9yB+ESPmrBJgfSkor 
v698uTnpwEjjkaP3DqhEzGLB6oSx2h5oSNbJTFcf2ZsjxZ8MuLPpPlszv4nC4g9HCC5A iByg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=1nsgNzH+YpfR3/KlbrupFdkkNs/4tvboqaVeY8OSkcA=; b=R5P+goYMj8lyCrjrYVuvZdDtqBi27lOjxQdKIjuvoxnv9FhhnPDcG8uRaqCZlSAAI1 NMt5rWK9f+sYRpsNB5gJ8mGIsG53M1oYfZFdPmzGM7af+BuygWiAeg1bVCAqQMz4WLpm Mu4HCvNj5naLoH7d3+t0ICfyZ/6EGqbhemGdDB2/tI6u6JGqG9pK1xP6S4AG0uUBHrVl I/7VIe6RTFOFZ1Tq1YDLUFktN1S8+poLko4/25LF47xoigooVk6PhowQSBqmEURXIHvD 0XfuXL7Eju/mPhZqEA8PNmW8+9aFumlnj/z321DTo4Xa0UNjKao8Nt9qrXGcVPmvAm0U BatQ== X-Gm-Message-State: AOAM530oNzKWqYpzU25ufK7A4kt7tyCS56NvGJmKsWDDmApj77oAyDVC YPHwkBkTRg8Qq2JDeUiqdDPJ/lbjrWY= X-Google-Smtp-Source: ABdhPJwSnhJG8X92VE6HuDLs//NrS5KMmVjqlsnEZdsMnybSnY0KcHl4l/b6B2QmMc6sHthhI0qqWsxaD2k= X-Received: from reiji-vws-sp.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:3d59]) (user=reijiw job=sendgmr) by 2002:a17:90b:224d:: with SMTP id hk13mr751201pjb.183.1644822086476; Sun, 13 Feb 2022 23:01:26 -0800 (PST) Date: Sun, 13 Feb 2022 22:57:41 -0800 In-Reply-To: <20220214065746.1230608-1-reijiw@google.com> Message-Id: <20220214065746.1230608-23-reijiw@google.com> Mime-Version: 1.0 References: <20220214065746.1230608-1-reijiw@google.com> X-Mailer: git-send-email 2.35.1.265.g69c8d7142f-goog Subject: [PATCH v5 22/27] KVM: arm64: Trap disabled features of ID_AA64PFR1_EL1 From: Reiji Watanabe To: Marc Zyngier , kvmarm@lists.cs.columbia.edu Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse , Alexandru Elisei , Suzuki K Poulose , Paolo Bonzini , Will Deacon , Andrew Jones , Fuad Tabba , Peng Liang , Peter Shier , Ricardo Koller , Oliver Upton , Jing Zhang , Raghavendra Rao Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230127_838918_E01645A3 X-CRM114-Status: GOOD ( 12.17 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add feature_config_ctrl for MTE, which is indicated in ID_AA64PFR1_EL1, to program configuration register to trap the guest's using the feature when it is not exposed to the guest. 
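For example, on an MTE-capable host where MTE has not been enabled for the VM (so the guest's ID_AA64PFR1_EL1.MTE field reads as 0), the hunk below amounts to the following change to the vCPU's tracked HCR_EL2 value at the first KVM_RUN; this is a sketch of the effect only:

/* Effect of feature_mte_trap_activate() when MTE is hidden from the guest */
vcpu->arch.hcr_el2 |= HCR_TID5;			/* trap GMID_EL1 accesses */
vcpu->arch.hcr_el2 &= ~(HCR_DCT | HCR_ATA);	/* no default tagging, no allocation tag access */
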
Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 72b7cfaef41e..a3d22f7f642b 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -344,6 +344,11 @@ static void feature_amu_trap_activate(struct kvm_vcpu *vcpu) feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPTR_EL2_TAM, 0); } +static void feature_mte_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TID5, HCR_DCT | HCR_ATA); +} + /* For ID_AA64PFR0_EL1 */ static struct feature_config_ctrl ftr_ctrl_ras = { .ftr_reg = SYS_ID_AA64PFR0_EL1, @@ -361,6 +366,15 @@ static struct feature_config_ctrl ftr_ctrl_amu = { .trap_activate = feature_amu_trap_activate, }; +/* For ID_AA64PFR1_EL1 */ +static struct feature_config_ctrl ftr_ctrl_mte = { + .ftr_reg = SYS_ID_AA64PFR1_EL1, + .ftr_shift = ID_AA64PFR1_MTE_SHIFT, + .ftr_min = ID_AA64PFR1_MTE_EL0, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_mte_trap_activate, +}; + struct id_reg_info { /* Register ID */ u32 sys_reg; @@ -885,6 +899,10 @@ static struct id_reg_info id_aa64pfr1_el1_info = { .init = init_id_aa64pfr1_el1_info, .validate = validate_id_aa64pfr1_el1, .vcpu_mask = vcpu_mask_id_aa64pfr1_el1, + .trap_features = &(const struct feature_config_ctrl *[]) { + &ftr_ctrl_mte, + NULL, + }, }; static struct id_reg_info id_aa64isar0_el1_info = { From patchwork Mon Feb 14 06:57:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745064 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 51951C433F5 for ; Mon, 14 Feb 2022 07:17:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=tm31Y/UUdH+jQ4UswLwRvNietuZs48GtxyGlK7efY2k=; b=K6OU/xkXu5DjdAU4YqAJGCxAOM AkjWwTVJUdDIug7J1UJdtxl5W3IVe7Xo8/OCF9rx/DlhWrOJgxJBtbz+wNqcygAnhcgSKMrWX8Wk3 Gnl64kNLA2GEChIHUXmArlsLK6+pTbOwdOkKi7wgTYOgQwjLNDHxZG9vbWxaikyVctM2xVqrUdBgi x99CAqNUe0vfG27xi/vOPPBmp5SgBSbRuah/41DSu5aDQ8AtVeN8KhUWh3y4repatCW7iEGPuMOD3 DgOpijNGxN7vyramPn3raufleL59N6V7o2S2dmbmxEBTLcuQb0EOG6NBs30kJ+jGXTUw4J2bXI5Ki upmJe62A==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVad-00DdUE-Bv; Mon, 14 Feb 2022 07:16:04 +0000 Received: from mail-pj1-x1049.google.com ([2607:f8b0:4864:20::1049]) by bombadil.infradead.org with esmtps (Exim 4.94.2 #2 (Red Hat Linux)) id 1nJVMc-00DXZm-Mo for linux-arm-kernel@lists.infradead.org; Mon, 14 Feb 2022 07:01:36 +0000 Received: by mail-pj1-x1049.google.com with SMTP id jf17-20020a17090b175100b001b90cf26a4eso11535859pjb.3 for ; Sun, 13 Feb 2022 23:01:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
Date: Sun, 13 Feb 2022 22:57:42 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-24-reijiw@google.com>
Subject: [PATCH v5 23/27] KVM: arm64: Trap disabled features of ID_AA64DFR0_EL1
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones, Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add feature_config_ctrl for PMUv3, PMS and TraceFilt, which are indicated in ID_AA64DFR0_EL1, to program the configuration registers that trap the guest's use of those features when they are not exposed to the guest.
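To make the effect of one of these entries concrete, here is a simplified, self-contained example (not code from this patch; the helper name is made up) of what ftr_ctrl_pmuv3 amounts to for a guest whose ID_AA64DFR0_EL1.PMUVer field reads as zero:

static void pmuv3_trap_example(struct kvm_vcpu *vcpu)
{
	u64 dfr0 = __read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
	unsigned int pmuver = cpuid_feature_extract_unsigned_field(dfr0,
						ID_AA64DFR0_PMUVER_SHIFT);

	/*
	 * PMUVer below ID_AA64DFR0_PMUVER_8_0 means PMUv3 is not exposed to
	 * the guest (the IMPLEMENTATION DEFINED 0xf encoding is ignored here
	 * for brevity), so trap the guest's PMU register accesses by setting
	 * MDCR_EL2.TPM.
	 */
	if (pmuver < ID_AA64DFR0_PMUVER_8_0)
		feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TPM, 0);
}

The ftr_ctrl_pms and ftr_ctrl_tracefilt entries follow the same pattern, using MDCR_EL2_TPMS (plus clearing the E2PB bits) and MDCR_EL2_TTRF respectively.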
Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 64 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 64 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index a3d22f7f642b..d91be297559d 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -349,6 +349,30 @@ static void feature_mte_trap_activate(struct kvm_vcpu *vcpu) feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TID5, HCR_DCT | HCR_ATA); } +static void feature_trace_trap_activate(struct kvm_vcpu *vcpu) +{ + if (has_vhe()) + feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPACR_EL1_TTA, 0); + else + feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPTR_EL2_TTA, 0); +} + +static void feature_pmuv3_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TPM, 0); +} + +static void feature_pms_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TPMS, + MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT); +} + +static void feature_tracefilt_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TTRF, 0); +} + /* For ID_AA64PFR0_EL1 */ static struct feature_config_ctrl ftr_ctrl_ras = { .ftr_reg = SYS_ID_AA64PFR0_EL1, @@ -375,6 +399,39 @@ static struct feature_config_ctrl ftr_ctrl_mte = { .trap_activate = feature_mte_trap_activate, }; +/* For ID_AA64DFR0_EL1 */ +static struct feature_config_ctrl ftr_ctrl_trace = { + .ftr_reg = SYS_ID_AA64DFR0_EL1, + .ftr_shift = ID_AA64DFR0_TRACEVER_SHIFT, + .ftr_min = 1, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_trace_trap_activate, +}; + +static struct feature_config_ctrl ftr_ctrl_pmuv3 = { + .ftr_reg = SYS_ID_AA64DFR0_EL1, + .ftr_shift = ID_AA64DFR0_PMUVER_SHIFT, + .ftr_min = ID_AA64DFR0_PMUVER_8_0, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_pmuv3_trap_activate, +}; + +static struct feature_config_ctrl ftr_ctrl_pms = { + .ftr_reg = SYS_ID_AA64DFR0_EL1, + .ftr_shift = ID_AA64DFR0_PMSVER_SHIFT, + .ftr_min = ID_AA64DFR0_PMSVER_8_2, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_pms_trap_activate, +}; + +static struct feature_config_ctrl ftr_ctrl_tracefilt = { + .ftr_reg = SYS_ID_AA64DFR0_EL1, + .ftr_shift = ID_AA64DFR0_TRACE_FILT_SHIFT, + .ftr_min = 1, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_tracefilt_trap_activate, +}; + struct id_reg_info { /* Register ID */ u32 sys_reg; @@ -941,6 +998,13 @@ static struct id_reg_info id_aa64dfr0_el1_info = { .init = init_id_aa64dfr0_el1_info, .validate = validate_id_aa64dfr0_el1, .vcpu_mask = vcpu_mask_id_aa64dfr0_el1, + .trap_features = &(const struct feature_config_ctrl *[]) { + &ftr_ctrl_trace, + &ftr_ctrl_pmuv3, + &ftr_ctrl_pms, + &ftr_ctrl_tracefilt, + NULL, + }, }; static struct id_reg_info id_dfr0_el1_info = { From patchwork Mon Feb 14 06:57:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 06B3FC433F5 for ; Mon, 14 Feb 2022 07:20:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Date: Sun, 13 Feb 2022 22:57:43 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-25-reijiw@google.com>
Subject: [PATCH v5 24/27] KVM: arm64: Trap disabled features of ID_AA64MMFR1_EL1
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones, Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao
Anata , Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230141_992500_57A49C61 X-CRM114-Status: GOOD ( 14.45 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add feature_config_ctrl for LORegions, which is indicated in ID_AA64MMFR1_EL1, to program configuration register to trap guest's using the feature when it is not exposed to the guest. Change trap_loregion() to use vcpu_feature_is_available() to simplify checking of the feature's availability. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 21 +++++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index d91be297559d..205670a7d7c5 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -373,6 +373,11 @@ static void feature_tracefilt_trap_activate(struct kvm_vcpu *vcpu) feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TTRF, 0); } +static void feature_lor_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TLOR, 0); +} + /* For ID_AA64PFR0_EL1 */ static struct feature_config_ctrl ftr_ctrl_ras = { .ftr_reg = SYS_ID_AA64PFR0_EL1, @@ -432,6 +437,15 @@ static struct feature_config_ctrl ftr_ctrl_tracefilt = { .trap_activate = feature_tracefilt_trap_activate, }; +/* For ID_AA64MMFR1_EL1 */ +static struct feature_config_ctrl ftr_ctrl_lor = { + .ftr_reg = SYS_ID_AA64MMFR1_EL1, + .ftr_shift = ID_AA64MMFR1_LOR_SHIFT, + .ftr_min = 1, + .ftr_signed = FTR_UNSIGNED, + .trap_activate = feature_lor_trap_activate, +}; + struct id_reg_info { /* Register ID */ u32 sys_reg; @@ -991,6 +1005,10 @@ static struct id_reg_info id_aa64mmfr0_el1_info = { static struct id_reg_info id_aa64mmfr1_el1_info = { .sys_reg = SYS_ID_AA64MMFR1_EL1, .validate = validate_id_aa64mmfr1_el1, + .trap_features = &(const struct feature_config_ctrl *[]) { + &ftr_ctrl_lor, + NULL, + }, }; static struct id_reg_info id_aa64dfr0_el1_info = { @@ -1111,10 +1129,9 @@ static bool trap_loregion(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { - u64 val = __read_id_reg(vcpu, SYS_ID_AA64MMFR1_EL1); u32 sr = reg_to_encoding(r); - if (!(val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))) { + if (!vcpu_feature_is_available(vcpu, &ftr_ctrl_lor)) { kvm_inject_undefined(vcpu); return false; } From patchwork Mon Feb 14 06:57:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745069 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7CB17C433EF for ; Mon, 14 Feb 2022 07:21:11 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: 
Date: Sun, 13 Feb 2022 22:57:44 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-26-reijiw@google.com>
Subject: [PATCH v5 25/27] KVM: arm64: Trap disabled features of ID_AA64ISAR1_EL1
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org,
linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add feature_config_ctrl for PTRAUTH, which is indicated in ID_AA64ISAR1_EL1, to program configuration register to trap guest's using the feature when it is not exposed to the guest. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs.c | 39 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 205670a7d7c5..562f9b28767a 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -283,6 +283,30 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu, (cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \ ID_AA64ISAR1_GPI_IMP_DEF) +/* + * Return true if ptrauth needs to be trapped. + * (i.e. if ptrauth is supported on the host but not exposed to the guest) + */ +static bool vcpu_need_trap_ptrauth(struct kvm_vcpu *vcpu) +{ + u64 val; + bool generic, address; + + if (!system_has_full_ptr_auth()) + /* The feature is not supported. */ + return false; + + val = __read_id_reg(vcpu, SYS_ID_AA64ISAR1_EL1); + generic = aa64isar1_has_gpi(val) || aa64isar1_has_gpa(val); + address = aa64isar1_has_api(val) || aa64isar1_has_apa(val); + if (generic && address) + /* The feature is available. */ + return false; + + /* The feature is supported but hidden. */ + return true; +} + /* * Feature information to program configuration register to trap or disable * guest's using a feature when the feature is not exposed to the guest. @@ -378,6 +402,11 @@ static void feature_lor_trap_activate(struct kvm_vcpu *vcpu) feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TLOR, 0); } +static void feature_ptrauth_trap_activate(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, 0, HCR_API | HCR_APK); +} + /* For ID_AA64PFR0_EL1 */ static struct feature_config_ctrl ftr_ctrl_ras = { .ftr_reg = SYS_ID_AA64PFR0_EL1, @@ -446,6 +475,12 @@ static struct feature_config_ctrl ftr_ctrl_lor = { .trap_activate = feature_lor_trap_activate, }; +/* For SYS_ID_AA64ISAR1_EL1 */ +static struct feature_config_ctrl ftr_ctrl_ptrauth = { + .ftr_need_trap = vcpu_need_trap_ptrauth, + .trap_activate = feature_ptrauth_trap_activate, +}; + struct id_reg_info { /* Register ID */ u32 sys_reg; @@ -986,6 +1021,10 @@ static struct id_reg_info id_aa64isar1_el1_info = { .init = init_id_aa64isar1_el1_info, .validate = validate_id_aa64isar1_el1, .vcpu_mask = vcpu_mask_id_aa64isar1_el1, + .trap_features = &(const struct feature_config_ctrl *[]) { + &ftr_ctrl_ptrauth, + NULL, + }, }; static struct id_reg_info id_aa64mmfr0_el1_info = { From patchwork Mon Feb 14 06:57:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745070 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DA0D0C433EF for ; Mon, 14 Feb 2022 07:23:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: 
Date: Sun, 13 Feb 2022 22:57:45 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-27-reijiw@google.com>
Subject: [PATCH v5 26/27] KVM: arm64: Add kunit test for trap initialization
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones, Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
Reiji Watanabe X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20220213_230154_925305_95195429 X-CRM114-Status: GOOD ( 17.11 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add KUnit tests for functions in arch/arm64/kvm/sys_regs_test.c that activates traps for disabled features. Signed-off-by: Reiji Watanabe --- arch/arm64/kvm/sys_regs_test.c | 262 +++++++++++++++++++++++++++++++++ 1 file changed, 262 insertions(+) diff --git a/arch/arm64/kvm/sys_regs_test.c b/arch/arm64/kvm/sys_regs_test.c index 30603d0623cd..391e13579ca7 100644 --- a/arch/arm64/kvm/sys_regs_test.c +++ b/arch/arm64/kvm/sys_regs_test.c @@ -991,6 +991,265 @@ static void validate_id_reg_test(struct kunit *test) GET_ID_REG_INFO(id) = original_idr; } +struct trap_config_test { + u64 set; + u64 clear; + u64 prev_val; + u64 expect_val; +}; + +struct trap_config_test trap_params[] = { + {0x30000800000, 0, 0, 0x30000800000}, + {0, 0x30000800000, 0, 0}, + {0x30000800000, 0, (u64)-1, (u64)-1}, + {0, 0x30000800000, (u64)-1, (u64)0xfffffcffff7fffff}, +}; + +static void trap_case_to_desc(struct trap_config_test *t, char *desc) +{ + snprintf(desc, KUNIT_PARAM_DESC_SIZE, + "trap - set:0x%llx, clear:0x%llx, prev_val:0x%llx\n", + t->set, t->clear, t->prev_val); +} + +KUNIT_ARRAY_PARAM(trap, trap_params, trap_case_to_desc); + +/* Tests for feature_trap_activate(). */ +static void feature_trap_activate_test(struct kunit *test) +{ + struct kvm_vcpu *vcpu; + const struct trap_config_test *trap = test->param_value; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_ASSERT_TRUE(test, vcpu); + + /* Test for HCR_EL2 */ + vcpu->arch.hcr_el2 = trap->prev_val; + feature_trap_activate(vcpu, VCPU_HCR_EL2, trap->set, trap->clear); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, trap->expect_val); + + /* Test for MDCR_EL2 */ + vcpu->arch.mdcr_el2 = trap->prev_val; + feature_trap_activate(vcpu, VCPU_MDCR_EL2, trap->set, trap->clear); + KUNIT_EXPECT_EQ(test, vcpu->arch.mdcr_el2, trap->expect_val); + + /* Test for CPTR_EL2 */ + vcpu->arch.cptr_el2 = trap->prev_val; + feature_trap_activate(vcpu, VCPU_CPTR_EL2, trap->set, trap->clear); + KUNIT_EXPECT_EQ(test, vcpu->arch.cptr_el2, trap->expect_val); +} + +static u64 test_trap_set0; +static u64 test_trap_clear0; +static void test_trap_activate0(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, + test_trap_set0, test_trap_clear0); +} + +static u64 test_trap_set1; +static u64 test_trap_clear1; +static void test_trap_activate1(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, + test_trap_set1, test_trap_clear1); +} + +static u64 test_trap_set2; +static u64 test_trap_clear2; +static void test_trap_activate2(struct kvm_vcpu *vcpu) +{ + feature_trap_activate(vcpu, VCPU_HCR_EL2, + test_trap_set2, test_trap_clear2); +} + + +static void setup_feature_config_ctrl(struct feature_config_ctrl *config, + u32 id, int shift, int min, bool sign, + void *fn) +{ + memset(config, 0, sizeof(*config)); + config->ftr_reg = id; + config->ftr_shift = shift; + config->ftr_min = min; + config->ftr_signed = sign; + config->trap_activate = fn; +} + +/* + * Tests for id_reg_features_trap_activate. + * Setup a id_reg_info with three entries in id_reg_info->trap_features[]. 
+ * Check if the config register is updated to enable trap for the disabled + * features. + */ +static void id_reg_features_trap_activate_test(struct kunit *test) +{ + struct kvm_vcpu *vcpu; + u32 id; + u64 cfg_set, cfg_clear, id_reg_sys_val, id_reg_val; + struct id_reg_info id_reg_data; + struct feature_config_ctrl config0, config1, config2; + struct feature_config_ctrl *trap_features[] = { + &config0, &config1, &config2, NULL, + }; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_EXPECT_TRUE(test, vcpu); + if (!vcpu) + return; + + /* Setup id_reg_info */ + id_reg_sys_val = 0x7777777777777777; + id = SYS_ID_AA64DFR0_EL1; + id_reg_data.sys_reg = id; + id_reg_data.sys_val = id_reg_sys_val; + id_reg_data.vcpu_limit_val = (u64)-1; + id_reg_data.trap_features = + (const struct feature_config_ctrl *(*)[])trap_features; + + /* Setup the 1st feature_config_ctrl */ + test_trap_set0 = 0x3; + test_trap_clear0 = 0x0; + setup_feature_config_ctrl(&config0, id, 60, 2, FTR_UNSIGNED, + &test_trap_activate0); + + /* Setup the 2nd feature_config_ctrl */ + test_trap_set1 = 0x30000040; + test_trap_clear1 = 0x40000000; + setup_feature_config_ctrl(&config1, id, 0, 1, FTR_UNSIGNED, + &test_trap_activate1); + + /* Setup the 3rd feature_config_ctrl */ + test_trap_set2 = 0x30000000800; + test_trap_clear2 = 0x40000000000; + setup_feature_config_ctrl(&config2, id, 4, 0, FTR_SIGNED, + &test_trap_activate2); + +#define ftr_dis(cfg) \ + ((u64)(((cfg)->ftr_min - 1) & 0xf) << (cfg)->ftr_shift) + +#define ftr_en(cfg) \ + ((u64)(cfg)->ftr_min << (cfg)->ftr_shift) + + /* Test with features enabled for config0, 1 and 2 */ + id_reg_val = ftr_en(&config0) | ftr_en(&config1) | ftr_en(&config2); + write_kvm_id_reg(vcpu->kvm, id, id_reg_val); + vcpu->arch.hcr_el2 = 0; + id_reg_features_trap_activate(vcpu, &id_reg_data); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, 0); + + + /* Test with features disabled for config0 only */ + id_reg_val = ftr_dis(&config0) | ftr_en(&config1) | ftr_en(&config2); + write_kvm_id_reg(vcpu->kvm, id, id_reg_val); + vcpu->arch.hcr_el2 = 0; + cfg_set = test_trap_set0; + cfg_clear = test_trap_clear0; + + id_reg_features_trap_activate(vcpu, &id_reg_data); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_set, cfg_set); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_clear, 0); + + + /* Test with features disabled for config0 and config1 */ + id_reg_val = ftr_dis(&config0) | ftr_dis(&config1) | ftr_en(&config2); + write_kvm_id_reg(vcpu->kvm, id, id_reg_val); + vcpu->arch.hcr_el2 = 0; + + cfg_set = test_trap_set0 | test_trap_set1; + cfg_clear = test_trap_clear0 | test_trap_clear1; + + id_reg_features_trap_activate(vcpu, &id_reg_data); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_set, cfg_set); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_clear, 0); + + + /* Test with features disabled for config0, config1, and config2 */ + id_reg_val = ftr_dis(&config0) | ftr_dis(&config1) | ftr_dis(&config2); + write_kvm_id_reg(vcpu->kvm, id, id_reg_val); + vcpu->arch.hcr_el2 = 0; + + cfg_set = test_trap_set0 | test_trap_set1 | test_trap_set2; + cfg_clear = test_trap_clear0 | test_trap_clear1 | test_trap_clear2; + + id_reg_features_trap_activate(vcpu, &id_reg_data); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_set, cfg_set); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_clear, 0); + + + /* Test with id_reg_info == NULL */ + vcpu->arch.hcr_el2 = 0; + id_reg_features_trap_activate(vcpu, NULL); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, 0); + + + /* Test with id_reg_data.trap_features = NULL */ + 
id_reg_data.trap_features = NULL; + vcpu->arch.hcr_el2 = 0; + id_reg_features_trap_activate(vcpu, &id_reg_data); + KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, 0); + +} + +/* Tests for vcpu_need_trap_ptrauth(). */ +static void vcpu_need_trap_ptrauth_test(struct kunit *test) +{ + struct kvm_vcpu *vcpu; + u32 id = SYS_ID_AA64ISAR1_EL1; + + vcpu = test_kvm_vcpu_init(test); + KUNIT_EXPECT_TRUE(test, vcpu); + if (!vcpu) + return; + + if (system_has_full_ptr_auth()) { + /* Tests with PTRAUTH disabled vCPU */ + write_kvm_id_reg(vcpu->kvm, id, 0x0); + KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPI = 1, API = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x10000100); + KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPI = 1, APA = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x10000010); + KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPA = 1, API = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x01000100); + KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPA = 1, APA = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x01000010); + KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu)); + + + /* Tests with PTRAUTH enabled vCPU */ + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH; + + write_kvm_id_reg(vcpu->kvm, id, 0x0); + KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPI = 1, API = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x10000100); + KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPI = 1, APA = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x10000010); + KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPA = 1, API = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x01000100); + KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu)); + + /* GPA = 1, APA = 1 */ + write_kvm_id_reg(vcpu->kvm, id, 0x01000010); + KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu)); + } else { + KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu)); + } +} + static struct kunit_case kvm_sys_regs_test_cases[] = { KUNIT_CASE_PARAM(vcpu_id_reg_feature_frac_check_test, frac_gen_params), KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran4_2_gen_params), @@ -1006,6 +1265,9 @@ static struct kunit_case kvm_sys_regs_test_cases[] = { KUNIT_CASE(validate_id_dfr0_el1_test), KUNIT_CASE(validate_mvfr1_el1_test), KUNIT_CASE(validate_id_reg_test), + KUNIT_CASE(vcpu_need_trap_ptrauth_test), + KUNIT_CASE_PARAM(feature_trap_activate_test, trap_gen_params), + KUNIT_CASE(id_reg_features_trap_activate_test), {} }; From patchwork Mon Feb 14 06:57:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Reiji Watanabe X-Patchwork-Id: 12745071 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id DB77AC433F5 for ; Mon, 14 Feb 2022 07:25:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:References: Mime-Version:Message-Id:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; 
Date: Sun, 13 Feb 2022 22:57:46 -0800
In-Reply-To: <20220214065746.1230608-1-reijiw@google.com>
Message-Id: <20220214065746.1230608-28-reijiw@google.com>
Subject: [PATCH v5 27/27] KVM: arm64: selftests: Introduce id_reg_test
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones, Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe
List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Introduce a test for aarch64 to validate basic behavior of KVM_GET_ONE_REG and KVM_SET_ONE_REG for ID registers. This test runs only when KVM_CAP_ARM_ID_REG_CONFIGURABLE is supported. Signed-off-by: Reiji Watanabe --- tools/arch/arm64/include/asm/sysreg.h | 1 + tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/aarch64/id_reg_test.c | 1239 +++++++++++++++++ 4 files changed, 1242 insertions(+) create mode 100644 tools/testing/selftests/kvm/aarch64/id_reg_test.c diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h index 7640fa27be94..be3947c125f1 100644 --- a/tools/arch/arm64/include/asm/sysreg.h +++ b/tools/arch/arm64/include/asm/sysreg.h @@ -793,6 +793,7 @@ #define ID_AA64PFR0_ELx_32BIT_64BIT 0x2 /* id_aa64pfr1 */ +#define ID_AA64PFR1_CSV2FRAC_SHIFT 32 #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 #define ID_AA64PFR1_RASFRAC_SHIFT 12 #define ID_AA64PFR1_MTE_SHIFT 8 diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index dce7de7755e6..c82c1978d5bb 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -2,6 +2,7 @@ /aarch64/arch_timer /aarch64/debug-exceptions /aarch64/get-reg-list +/aarch64/id_reg_test /aarch64/psci_cpu_on_test /aarch64/vgic_init /aarch64/vgic_irq diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 0e4926bc9a58..e713b26b21fc 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -103,6 +103,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_aarch64 += aarch64/arch_timer TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list +TEST_GEN_PROGS_aarch64 += aarch64/id_reg_test TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test TEST_GEN_PROGS_aarch64 += aarch64/vgic_init TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq diff --git a/tools/testing/selftests/kvm/aarch64/id_reg_test.c b/tools/testing/selftests/kvm/aarch64/id_reg_test.c new file mode 100644 index 000000000000..917abe951170 --- /dev/null +++ b/tools/testing/selftests/kvm/aarch64/id_reg_test.c @@ -0,0 +1,1239 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * id_reg_test.c - Tests reading/writing the aarch64's ID registers + * + * The test validates KVM_SET_ONE_REG/KVM_GET_ONE_REG ioctl for ID + * registers as well as reading ID register from the guest works fine. + * + * Copyright (c) 2022, Google LLC. 
+ */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include + +#include "kvm_util.h" +#include "processor.h" +#include "vgic.h" + +/* Reserved ID registers */ +#define SYS_ID_REG_3_3_EL1 sys_reg(3, 0, 0, 3, 3) +#define SYS_ID_REG_3_7_EL1 sys_reg(3, 0, 0, 3, 7) + +#define SYS_ID_REG_4_2_EL1 sys_reg(3, 0, 0, 4, 2) +#define SYS_ID_REG_4_3_EL1 sys_reg(3, 0, 0, 4, 3) +#define SYS_ID_REG_4_5_EL1 sys_reg(3, 0, 0, 4, 5) +#define SYS_ID_REG_4_6_EL1 sys_reg(3, 0, 0, 4, 6) +#define SYS_ID_REG_4_7_EL1 sys_reg(3, 0, 0, 4, 7) + +#define SYS_ID_REG_5_2_EL1 sys_reg(3, 0, 0, 5, 2) +#define SYS_ID_REG_5_3_EL1 sys_reg(3, 0, 0, 5, 3) +#define SYS_ID_REG_5_6_EL1 sys_reg(3, 0, 0, 5, 6) +#define SYS_ID_REG_5_7_EL1 sys_reg(3, 0, 0, 5, 7) + +#define SYS_ID_REG_6_2_EL1 sys_reg(3, 0, 0, 6, 2) +#define SYS_ID_REG_6_3_EL1 sys_reg(3, 0, 0, 6, 3) +#define SYS_ID_REG_6_4_EL1 sys_reg(3, 0, 0, 6, 4) +#define SYS_ID_REG_6_5_EL1 sys_reg(3, 0, 0, 6, 5) +#define SYS_ID_REG_6_6_EL1 sys_reg(3, 0, 0, 6, 6) +#define SYS_ID_REG_6_7_EL1 sys_reg(3, 0, 0, 6, 7) + +#define SYS_ID_REG_7_3_EL1 sys_reg(3, 0, 0, 7, 3) +#define SYS_ID_REG_7_4_EL1 sys_reg(3, 0, 0, 7, 4) +#define SYS_ID_REG_7_5_EL1 sys_reg(3, 0, 0, 7, 5) +#define SYS_ID_REG_7_6_EL1 sys_reg(3, 0, 0, 7, 6) +#define SYS_ID_REG_7_7_EL1 sys_reg(3, 0, 0, 7, 7) + +#define READ_ID_REG_FN(name) read_## name ## _EL1 + +#define DEFINE_READ_SYS_REG(reg_name) \ +uint64_t read_##reg_name(void) \ +{ \ + return read_sysreg_s(SYS_##reg_name); \ +} + +#define DEFINE_READ_ID_REG(name) \ + DEFINE_READ_SYS_REG(name ## _EL1) + +#define __ID_REG(reg_name) \ + .name = #reg_name, \ + .id = SYS_## reg_name ##_EL1, \ + .read_reg = READ_ID_REG_FN(reg_name), + +#define ID_REG_ENT(reg_name) \ + [ID_IDX(reg_name)] = { __ID_REG(reg_name) } + +/* Functions to read each ID register */ +/* CRm=1 */ +DEFINE_READ_ID_REG(ID_PFR0) +DEFINE_READ_ID_REG(ID_PFR1) +DEFINE_READ_ID_REG(ID_DFR0) +DEFINE_READ_ID_REG(ID_AFR0) +DEFINE_READ_ID_REG(ID_MMFR0) +DEFINE_READ_ID_REG(ID_MMFR1) +DEFINE_READ_ID_REG(ID_MMFR2) +DEFINE_READ_ID_REG(ID_MMFR3) + +/* CRm=2 */ +DEFINE_READ_ID_REG(ID_ISAR0) +DEFINE_READ_ID_REG(ID_ISAR1) +DEFINE_READ_ID_REG(ID_ISAR2) +DEFINE_READ_ID_REG(ID_ISAR3) +DEFINE_READ_ID_REG(ID_ISAR4) +DEFINE_READ_ID_REG(ID_ISAR5) +DEFINE_READ_ID_REG(ID_MMFR4) +DEFINE_READ_ID_REG(ID_ISAR6) + +/* CRm=3 */ +DEFINE_READ_ID_REG(MVFR0) +DEFINE_READ_ID_REG(MVFR1) +DEFINE_READ_ID_REG(MVFR2) +DEFINE_READ_ID_REG(ID_REG_3_3) +DEFINE_READ_ID_REG(ID_PFR2) +DEFINE_READ_ID_REG(ID_DFR1) +DEFINE_READ_ID_REG(ID_MMFR5) +DEFINE_READ_ID_REG(ID_REG_3_7) + +/* CRm=4 */ +DEFINE_READ_ID_REG(ID_AA64PFR0) +DEFINE_READ_ID_REG(ID_AA64PFR1) +DEFINE_READ_ID_REG(ID_REG_4_2) +DEFINE_READ_ID_REG(ID_REG_4_3) +DEFINE_READ_ID_REG(ID_AA64ZFR0) +DEFINE_READ_ID_REG(ID_REG_4_5) +DEFINE_READ_ID_REG(ID_REG_4_6) +DEFINE_READ_ID_REG(ID_REG_4_7) + +/* CRm=5 */ +DEFINE_READ_ID_REG(ID_AA64DFR0) +DEFINE_READ_ID_REG(ID_AA64DFR1) +DEFINE_READ_ID_REG(ID_REG_5_2) +DEFINE_READ_ID_REG(ID_REG_5_3) +DEFINE_READ_ID_REG(ID_AA64AFR0) +DEFINE_READ_ID_REG(ID_AA64AFR1) +DEFINE_READ_ID_REG(ID_REG_5_6) +DEFINE_READ_ID_REG(ID_REG_5_7) + +/* CRm=6 */ +DEFINE_READ_ID_REG(ID_AA64ISAR0) +DEFINE_READ_ID_REG(ID_AA64ISAR1) +DEFINE_READ_ID_REG(ID_REG_6_2) +DEFINE_READ_ID_REG(ID_REG_6_3) +DEFINE_READ_ID_REG(ID_REG_6_4) +DEFINE_READ_ID_REG(ID_REG_6_5) +DEFINE_READ_ID_REG(ID_REG_6_6) +DEFINE_READ_ID_REG(ID_REG_6_7) + +/* CRm=7 */ +DEFINE_READ_ID_REG(ID_AA64MMFR0) +DEFINE_READ_ID_REG(ID_AA64MMFR1) +DEFINE_READ_ID_REG(ID_AA64MMFR2) +DEFINE_READ_ID_REG(ID_REG_7_3) 
+DEFINE_READ_ID_REG(ID_REG_7_4) +DEFINE_READ_ID_REG(ID_REG_7_5) +DEFINE_READ_ID_REG(ID_REG_7_6) +DEFINE_READ_ID_REG(ID_REG_7_7) + +#define ID_IDX(name) REG_IDX_## name + +enum id_reg_idx { + /* CRm=1 */ + ID_IDX(ID_PFR0) = 0, + ID_IDX(ID_PFR1), + ID_IDX(ID_DFR0), + ID_IDX(ID_AFR0), + ID_IDX(ID_MMFR0), + ID_IDX(ID_MMFR1), + ID_IDX(ID_MMFR2), + ID_IDX(ID_MMFR3), + + /* CRm=2 */ + ID_IDX(ID_ISAR0), + ID_IDX(ID_ISAR1), + ID_IDX(ID_ISAR2), + ID_IDX(ID_ISAR3), + ID_IDX(ID_ISAR4), + ID_IDX(ID_ISAR5), + ID_IDX(ID_MMFR4), + ID_IDX(ID_ISAR6), + + /* CRm=3 */ + ID_IDX(MVFR0), + ID_IDX(MVFR1), + ID_IDX(MVFR2), + ID_IDX(ID_REG_3_3), + ID_IDX(ID_PFR2), + ID_IDX(ID_DFR1), + ID_IDX(ID_MMFR5), + ID_IDX(ID_REG_3_7), + + /* CRm=4 */ + ID_IDX(ID_AA64PFR0), + ID_IDX(ID_AA64PFR1), + ID_IDX(ID_REG_4_2), + ID_IDX(ID_REG_4_3), + ID_IDX(ID_AA64ZFR0), + ID_IDX(ID_REG_4_5), + ID_IDX(ID_REG_4_6), + ID_IDX(ID_REG_4_7), + + /* CRm=5 */ + ID_IDX(ID_AA64DFR0), + ID_IDX(ID_AA64DFR1), + ID_IDX(ID_REG_5_2), + ID_IDX(ID_REG_5_3), + ID_IDX(ID_AA64AFR0), + ID_IDX(ID_AA64AFR1), + ID_IDX(ID_REG_5_6), + ID_IDX(ID_REG_5_7), + + /* CRm=6 */ + ID_IDX(ID_AA64ISAR0), + ID_IDX(ID_AA64ISAR1), + ID_IDX(ID_REG_6_2), + ID_IDX(ID_REG_6_3), + ID_IDX(ID_REG_6_4), + ID_IDX(ID_REG_6_5), + ID_IDX(ID_REG_6_6), + ID_IDX(ID_REG_6_7), + + /* CRm=7 */ + ID_IDX(ID_AA64MMFR0), + ID_IDX(ID_AA64MMFR1), + ID_IDX(ID_AA64MMFR2), + ID_IDX(ID_REG_7_3), + ID_IDX(ID_REG_7_4), + ID_IDX(ID_REG_7_5), + ID_IDX(ID_REG_7_6), + ID_IDX(ID_REG_7_7), +}; + +struct id_reg_test_info { + char *name; + uint32_t id; + /* Indicates the register can be set to 0 */ + bool can_clear; + uint64_t initial_value; + uint64_t current_value; + uint64_t (*read_reg)(void); +}; + +#define ID_REG_INFO(name) (&id_reg_list[ID_IDX(name)]) +static struct id_reg_test_info id_reg_list[] = { + /* CRm=1 */ + ID_REG_ENT(ID_PFR0), + ID_REG_ENT(ID_PFR1), + ID_REG_ENT(ID_DFR0), + ID_REG_ENT(ID_AFR0), + ID_REG_ENT(ID_MMFR0), + ID_REG_ENT(ID_MMFR1), + ID_REG_ENT(ID_MMFR2), + ID_REG_ENT(ID_MMFR3), + + /* CRm=2 */ + ID_REG_ENT(ID_ISAR0), + ID_REG_ENT(ID_ISAR1), + ID_REG_ENT(ID_ISAR2), + ID_REG_ENT(ID_ISAR3), + ID_REG_ENT(ID_ISAR4), + ID_REG_ENT(ID_ISAR5), + ID_REG_ENT(ID_MMFR4), + ID_REG_ENT(ID_ISAR6), + + /* CRm=3 */ + ID_REG_ENT(MVFR0), + ID_REG_ENT(MVFR1), + ID_REG_ENT(MVFR2), + ID_REG_ENT(ID_REG_3_3), + ID_REG_ENT(ID_PFR2), + ID_REG_ENT(ID_DFR1), + ID_REG_ENT(ID_MMFR5), + ID_REG_ENT(ID_REG_3_7), + + /* CRm=4 */ + ID_REG_ENT(ID_AA64PFR0), + ID_REG_ENT(ID_AA64PFR1), + ID_REG_ENT(ID_REG_4_2), + ID_REG_ENT(ID_REG_4_3), + ID_REG_ENT(ID_AA64ZFR0), + ID_REG_ENT(ID_REG_4_5), + ID_REG_ENT(ID_REG_4_6), + ID_REG_ENT(ID_REG_4_7), + + /* CRm=5 */ + ID_REG_ENT(ID_AA64DFR0), + ID_REG_ENT(ID_AA64DFR1), + ID_REG_ENT(ID_REG_5_2), + ID_REG_ENT(ID_REG_5_3), + ID_REG_ENT(ID_AA64AFR0), + ID_REG_ENT(ID_AA64AFR1), + ID_REG_ENT(ID_REG_5_6), + ID_REG_ENT(ID_REG_5_7), + + /* CRm=6 */ + ID_REG_ENT(ID_AA64ISAR0), + ID_REG_ENT(ID_AA64ISAR1), + ID_REG_ENT(ID_REG_6_2), + ID_REG_ENT(ID_REG_6_3), + ID_REG_ENT(ID_REG_6_4), + ID_REG_ENT(ID_REG_6_5), + ID_REG_ENT(ID_REG_6_6), + ID_REG_ENT(ID_REG_6_7), + + /* CRm=7 */ + ID_REG_ENT(ID_AA64MMFR0), + ID_REG_ENT(ID_AA64MMFR1), + ID_REG_ENT(ID_AA64MMFR2), + ID_REG_ENT(ID_REG_7_3), + ID_REG_ENT(ID_REG_7_4), + ID_REG_ENT(ID_REG_7_5), + ID_REG_ENT(ID_REG_7_6), + ID_REG_ENT(ID_REG_7_7), +}; + +static bool aarch32_support = true; + +/* Utilities to get a feature field from ID register value */ +static inline int +cpuid_signed_field_width(uint64_t id_val, int field, int width) +{ + return (s64)(id_val 
<< (64 - width - field)) >> (64 - width); +} + +static unsigned int +cpuid_unsigned_field_width(uint64_t id_val, int field, int width) +{ + return (uint64_t)(id_val << (64 - width - field)) >> (64 - width); +} + +static inline int __attribute_const__ +cpuid_extract_field_width(uint64_t id_val, int field, int width, bool sign) +{ + return (sign) ? cpuid_signed_field_width(id_val, field, width) : + cpuid_unsigned_field_width(id_val, field, width); +} + +#define is_id_reg(id) \ + (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && \ + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 0 && \ + sys_reg_CRm(id) < 8) + +#define GET_ID_FIELD(regval, shift, is_signed) \ + cpuid_extract_field_width(regval, shift, 4, is_signed) + +#define GET_ID_UFIELD(regval, shift) \ + cpuid_unsigned_field_width(regval, shift, 4) + +#define UPDATE_ID_UFIELD(regval, shift, fval) \ + (((regval) & ~(0xfULL << (shift))) | \ + (((uint64_t)((fval) & 0xf)) << (shift))) + +void pmu_init(struct kvm_vm *vm, uint32_t vcpu) +{ + struct kvm_device_attr attr = { + .group = KVM_ARM_VCPU_PMU_V3_CTRL, + .attr = KVM_ARM_VCPU_PMU_V3_INIT, + }; + vcpu_ioctl(vm, vcpu, KVM_SET_DEVICE_ATTR, &attr); +} + +void sve_init(struct kvm_vm *vm, uint32_t vcpu) +{ + int feature = KVM_ARM_VCPU_SVE; + + vcpu_ioctl(vm, vcpu, KVM_ARM_VCPU_FINALIZE, &feature); +} + +#define GICD_BASE_GPA 0x8000000ULL +#define GICR_BASE_GPA 0x80A0000ULL + +void test_vgic_init(struct kvm_vm *vm, uint32_t vcpu) +{ + /* We jsut need to configure gic v3 (we don't use it though) */ + vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA); +} + +static bool is_aarch32_id_reg(uint32_t id) +{ + uint32_t crm, op2; + + if (!is_id_reg(id)) + return false; + + crm = sys_reg_CRm(id); + op2 = sys_reg_Op2(id); + if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7))) + /* AArch32 ID register */ + return true; + + return false; +} + +#define MAX_CAPS 2 +struct feature_test_info { + char *name; /* Feature Name (Debug information) */ + + /* ID register that identifies the presence of the feature */ + struct id_reg_test_info *sreg; + + /* + * Bit position of the ID register field that identifies + * the presence of the feature. + */ + int shift; + + /* Min value of the field that indicates the presence of the feature. */ + int min; + bool is_sign; /* Is the field signed or unsigned ? */ + int ncaps; /* Number of valid Capabilities in caps[] */ + + /* KVM_CAP_* Capabilities to indicates that KVM supports this feature */ + long caps[MAX_CAPS]; + + /* struct kvm_enable_cap to use the capability if needed */ + struct kvm_enable_cap *opt_in_cap; + + /* Should the guest check the ID register for this feature ? */ + bool run_test; + + /* + * Extra initialization function to enable the feature if needed. + * (e.g. 
KVM_ARM_VCPU_FINALIZE for SVE) + */ + void (*init_feature)(struct kvm_vm *vm, uint32_t vcpuid); + + /* struct kvm_vcpu_init to opt-in the feature if needed */ + struct kvm_vcpu_init *vcpu_init; +}; + +/* Information for opt-in CPU features */ +static struct feature_test_info feature_test_info_table[] = { + { + .name = "SVE", + .sreg = ID_REG_INFO(ID_AA64PFR0), + .shift = ID_AA64PFR0_SVE_SHIFT, + .min = 1, + .caps = {KVM_CAP_ARM_SVE}, + .ncaps = 1, + .init_feature = sve_init, + .vcpu_init = &(struct kvm_vcpu_init) { + .features = {1ULL << KVM_ARM_VCPU_SVE}, + }, + }, + { + .name = "GIC", + .sreg = ID_REG_INFO(ID_AA64PFR0), + .shift = ID_AA64PFR0_GIC_SHIFT, + .min = 1, + .caps = {KVM_CAP_IRQCHIP}, + .ncaps = 1, + .init_feature = test_vgic_init, + }, + { + .name = "MTE", + .sreg = ID_REG_INFO(ID_AA64PFR1), + .shift = ID_AA64PFR1_MTE_SHIFT, + .min = 2, + .caps = {KVM_CAP_ARM_MTE}, + .ncaps = 1, + .opt_in_cap = &(struct kvm_enable_cap) { + .cap = KVM_CAP_ARM_MTE, + }, + }, + { + .name = "PMUV3", + .sreg = ID_REG_INFO(ID_AA64DFR0), + .shift = ID_AA64DFR0_PMUVER_SHIFT, + .min = 1, + .init_feature = pmu_init, + .caps = {KVM_CAP_ARM_PMU_V3}, + .ncaps = 1, + .vcpu_init = &(struct kvm_vcpu_init) { + .features = {1ULL << KVM_ARM_VCPU_PMU_V3}, + }, + }, + { + .name = "PERFMON", + .sreg = ID_REG_INFO(ID_DFR0), + .shift = ID_DFR0_PERFMON_SHIFT, + .min = 3, + .init_feature = pmu_init, + .caps = {KVM_CAP_ARM_PMU_V3}, + .ncaps = 1, + .vcpu_init = &(struct kvm_vcpu_init) { + .features = {1ULL << KVM_ARM_VCPU_PMU_V3}, + }, + }, +}; + +static void walk_id_reg_list(void (*fn)(struct id_reg_test_info *r, void *arg), + void *arg) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(id_reg_list); i++) + fn(&id_reg_list[i], arg); +} + +static void guest_code_id_reg_check_one(struct id_reg_test_info *idr, void *arg) +{ + uint64_t v = idr->read_reg(); + + GUEST_ASSERT_2(v == idr->current_value, idr->name, idr->current_value); +} + +static void guest_code_id_reg_check_all(uint32_t cpu) +{ + walk_id_reg_list(guest_code_id_reg_check_one, NULL); + GUEST_DONE(); +} + +static void guest_code_do_nothing(uint32_t cpu) +{ + GUEST_DONE(); +} + +static void guest_code_feature_check(uint32_t cpu) +{ + int i; + struct feature_test_info *finfo; + + for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++) { + finfo = &feature_test_info_table[i]; + if (finfo->run_test) + guest_code_id_reg_check_one(finfo->sreg, NULL); + } + + GUEST_DONE(); +} + +static void guest_code_ptrauth_check(uint32_t cpuid) +{ + struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1); + uint64_t val = sreg->read_reg(); + + GUEST_ASSERT_2(val == sreg->current_value, "PTRAUTH", val); + GUEST_DONE(); +} + +static void reset_id_reg_info_current_value(struct id_reg_test_info *info, + void *arg) +{ + info->current_value = info->initial_value; +} + +/* Reset current_value field of each id_reg_test_info */ +static void reset_id_reg_info(void) +{ + walk_id_reg_list(reset_id_reg_info_current_value, NULL); +} + +static struct kvm_vm *test_vm_create(uint32_t nvcpus, + void (*guest_code)(uint32_t), struct kvm_vcpu_init *init, + struct kvm_enable_cap *cap) +{ + struct kvm_vm *vm; + uint32_t cpuid; + uint64_t mem_pages; + + mem_pages = DEFAULT_GUEST_PHY_PAGES + DEFAULT_STACK_PGS * nvcpus; + mem_pages += mem_pages / (PTES_PER_MIN_PAGE * 2); + mem_pages = vm_adjust_num_guest_pages(VM_MODE_DEFAULT, mem_pages); + + vm = vm_create(VM_MODE_DEFAULT, mem_pages, O_RDWR); + if (cap) + vm_enable_cap(vm, cap); + + kvm_vm_elf_load(vm, program_invocation_name); + + if (init && init->target == 
-1) { + struct kvm_vcpu_init preferred; + + vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &preferred); + init->target = preferred.target; + } + + vm_init_descriptor_tables(vm); + for (cpuid = 0; cpuid < nvcpus; cpuid++) { + aarch64_vcpu_add_default(vm, cpuid, init, guest_code); + vcpu_init_descriptor_tables(vm, cpuid); + } + + ucall_init(vm, NULL); + return vm; +} + +static void test_vm_free(struct kvm_vm *vm) +{ + ucall_uninit(vm); + kvm_vm_free(vm); +} + +#define TEST_RUN(vm, cpu) \ + (test_vcpu_run(__func__, __LINE__, vm, cpu, true)) + +#define TEST_RUN_NO_SYNC_DATA(vm, cpu) \ + (test_vcpu_run(__func__, __LINE__, vm, cpu, false)) + +static int test_vcpu_run(const char *test_name, int line, + struct kvm_vm *vm, uint32_t vcpuid, bool sync_data) +{ + struct ucall uc; + int ret; + + if (sync_data) { + sync_global_to_guest(vm, id_reg_list); + sync_global_to_guest(vm, feature_test_info_table); + } + + vcpu_args_set(vm, vcpuid, 1, vcpuid); + + ret = _vcpu_run(vm, vcpuid); + if (ret) { + ret = errno; + goto sync_exit; + } + + switch (get_ucall(vm, vcpuid, &uc)) { + case UCALL_SYNC: + case UCALL_DONE: + ret = 0; + break; + case UCALL_ABORT: + TEST_FAIL( + "%s (%s) at line %d (user %s at line %d), args[3]=0x%lx", + (char *)uc.args[0], (char *)uc.args[2], (int)uc.args[1], + test_name, line, uc.args[3]); + break; + default: + TEST_FAIL("Unexpected guest exit\n"); + } + +sync_exit: + if (sync_data) { + sync_global_from_guest(vm, id_reg_list); + sync_global_from_guest(vm, feature_test_info_table); + } + return ret; +} + +struct vm_vcpu_arg { + struct kvm_vm *vm; + uint32_t vcpuid; + bool after_run; +}; + +/* + * Test if KVM_SET_ONE_REG can work with the value KVM_GET_ONE_REG returns, + * KVM_SET_ONE_REG with zero works before KVM_RUN (and fails after KVM_RUN), + * and KVM_GET_ONE_REG returns the value KVM_SET_ONE_REG sets. + */ +static void test_get_set_id_reg(struct id_reg_test_info *sreg, void *arg) +{ + struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm; + uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid; + bool after_run = ((struct vm_vcpu_arg *)arg)->after_run; + struct kvm_one_reg one_reg; + uint64_t reg_val, tval; + int ret; + + one_reg.addr = (uint64_t)®_val; + one_reg.id = KVM_ARM64_SYS_REG(sreg->id); + + /* Check the current register value */ + vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg); + TEST_ASSERT(reg_val == sreg->current_value, + "GET(%s) didn't return 0x%lx but 0x%lx", + sreg->name, sreg->current_value, reg_val); + tval = reg_val; + + /* Try to clear the register that should be able to be cleared. */ + if ((reg_val != 0) && (sreg->can_clear)) { + reg_val = 0; + ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg); + if (after_run) { + /* Expect an error after KVM_RUN */ + TEST_ASSERT(ret, + "Clearing %s unexpectedly worked\n", + sreg->name); + } else { + TEST_ASSERT(!ret, + "Clearing %s didn't work\n", sreg->name); + /* + * Make sure that KVM_GET_ONE_REG provides the value + * we set. + */ + vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg); + TEST_ASSERT(reg_val == 0, + "GET(%s) didn't return 0x%lx but 0x%lx", + sreg->name, (uint64_t)0, reg_val); + } + } + + /* Check if KVM_SET_ONE_REG works with the original value. */ + reg_val = tval; + ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg); + TEST_ASSERT(ret == 0, "Setting the same ID reg value should work\n"); + + /* Make sure that KVM_GET_ONE_REG provides the value we set. 
*/ + vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg); + TEST_ASSERT(reg_val == tval, + "GET(%s) didn't return 0x%lx but 0x%lx", + sreg->name, sreg->current_value, reg_val); +} + +/* + * Test if KVM_SET_ONE_REG with the current value works before KVM_RUN, + * values of ID registers the guest sees are consistent with the ones + * userspace sees, and KVM_SET_ONE_REG after KVM_RUN works when the + * specified value is the same as the current one (fails otherwise). + */ +static void test_id_regs_basic(void) +{ + struct kvm_vm *vm; + struct vm_vcpu_arg arg = { .vcpuid = 0 }; + int ret; + + reset_id_reg_info(); + + vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL); + + arg.vm = vm; + walk_id_reg_list(test_get_set_id_reg, &arg); + + ret = TEST_RUN(vm, 0); + assert(!ret); + + arg.after_run = true; + walk_id_reg_list(test_get_set_id_reg, &arg); + + test_vm_free(vm); +} + +static bool caps_are_supported(long *caps, int ncaps) +{ + int i; + + for (i = 0; i < ncaps; i++) { + if (kvm_check_cap(caps[i]) <= 0) + return false; + } + return true; +} + +#define NCAPS_PTRAUTH 2 + +/* + * Test if the ID register value reflects the ptrauth feature configuration. + * KVM_SET_ONE_REG should work as long as the requested value is consistent + * with the ptrauth feature configuration. + */ +static void test_feature_ptrauth(void) +{ + struct kvm_one_reg one_reg; + struct kvm_vcpu_init init; + struct kvm_vm *vm = NULL; + struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1); + uint32_t vcpu = 0; + int64_t rval; + int ret; + int apa, api, gpa, gpi; + char *name = "PTRAUTH"; + long caps[NCAPS_PTRAUTH] = {KVM_CAP_ARM_PTRAUTH_ADDRESS, + KVM_CAP_ARM_PTRAUTH_GENERIC}; + + reset_id_reg_info(); + one_reg.addr = (uint64_t)&rval; + one_reg.id = KVM_ARM64_SYS_REG(sreg->id); + + if (caps_are_supported(caps, NCAPS_PTRAUTH)) { + + /* Test with feature enabled */ + memset(&init, 0, sizeof(init)); + init.target = -1; + init.features[0] = (1ULL << KVM_ARM_VCPU_PTRAUTH_ADDRESS | + 1ULL << KVM_ARM_VCPU_PTRAUTH_GENERIC); + vm = test_vm_create(1, guest_code_ptrauth_check, &init, NULL); + vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg); + + /* Make sure values of apa/api/gpa/gpi fields are expected */ + apa = GET_ID_UFIELD(rval, ID_AA64ISAR1_APA_SHIFT); + api = GET_ID_UFIELD(rval, ID_AA64ISAR1_API_SHIFT); + gpa = GET_ID_UFIELD(rval, ID_AA64ISAR1_GPA_SHIFT); + gpi = GET_ID_UFIELD(rval, ID_AA64ISAR1_GPI_SHIFT); + + TEST_ASSERT((apa > 0) || (api > 0), + "Either apa(0x%x) or api(0x%x) must be available", + apa, gpa); + TEST_ASSERT((gpa > 0) || (gpi > 0), + "Either gpa(0x%x) or gpi(0x%x) must be available", + gpa, gpi); + + TEST_ASSERT((apa > 0) ^ (api > 0), + "Both apa(0x%x) and api(0x%x) must not be available", + apa, api); + TEST_ASSERT((gpa > 0) ^ (gpi > 0), + "Both gpa(0x%x) and gpi(0x%x) must not be available", + gpa, gpi); + + sreg->current_value = rval; + + pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n", + __func__, name, sreg->name, sreg->current_value); + + /* Make sure that the guest sees the same ID register value. 
*/ + ret = TEST_RUN(vm, vcpu); + + TEST_ASSERT(!ret, "%s:KVM_RUN failed with %s enabled", + __func__, name); + test_vm_free(vm); + } + + reset_id_reg_info(); + + /* Test with feature disabled */ + vm = test_vm_create(1, guest_code_feature_check, NULL, NULL); + vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg); + + apa = GET_ID_UFIELD(rval, ID_AA64ISAR1_APA_SHIFT); + api = GET_ID_UFIELD(rval, ID_AA64ISAR1_API_SHIFT); + gpa = GET_ID_UFIELD(rval, ID_AA64ISAR1_GPA_SHIFT); + gpi = GET_ID_UFIELD(rval, ID_AA64ISAR1_GPI_SHIFT); + TEST_ASSERT(!apa && !api && !gpa && !gpi, + "apa(0x%x), api(0x%x), gpa(0x%x), gpi(0x%x) must be zero", + apa, api, gpa, gpi); + + pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n", + __func__, name, sreg->name, sreg->current_value); + + /* Make sure that the guest sees the same ID register value. */ + ret = TEST_RUN(vm, vcpu); + TEST_ASSERT(!ret, "%s TEST_RUN failed with %s enabled, ret=0x%x", + __func__, name, ret); + + test_vm_free(vm); +} + +static bool feature_caps_are_available(struct feature_test_info *finfo) +{ + return ((finfo->ncaps > 0) && + caps_are_supported(finfo->caps, finfo->ncaps)); +} + +/* + * Test if the ID register value reflects the feature configuration. + * KVM_SET_ONE_REG should work as long as the requested value is + * consistent with the feature configuration. + */ +static void test_feature(struct feature_test_info *finfo) +{ + struct id_reg_test_info *sreg = finfo->sreg; + struct kvm_one_reg one_reg; + struct kvm_vcpu_init init, *initp = NULL; + struct kvm_vm *vm = NULL; + int64_t fval, reg_val; + uint32_t vcpu = 0; + bool is_sign = finfo->is_sign; + int min = finfo->min; + int shift = finfo->shift; + int ret; + + pr_debug("%s: %s (reg %s)\n", __func__, finfo->name, sreg->name); + + reset_id_reg_info(); + + if (is_aarch32_id_reg(sreg->id) && !aarch32_support) + /* + * AArch32 is not supported. Skip testing with the AArch32 + * ID register. + */ + return; + + /* Indicate that guest runs the test for the feature */ + finfo->run_test = 1; + one_reg.addr = (uint64_t)®_val; + one_reg.id = KVM_ARM64_SYS_REG(sreg->id); + + /* + * Test with feature enabled if the feature is exposed in the default + * ID register value or the capabilities are supported at KVM level. + */ + if ((GET_ID_FIELD(sreg->initial_value, shift, is_sign) >= min) || + feature_caps_are_available(finfo)) { + if (finfo->vcpu_init) { + /* Need to enable the feature via KVM_ARM_VCPU_INIT. */ + memset(&init, 0, sizeof(init)); + init = *finfo->vcpu_init; + init.target = -1; + initp = &init; + } + + vm = test_vm_create(1, guest_code_feature_check, initp, + finfo->opt_in_cap); + if (finfo->init_feature) + /* Run any required extra process to use the feature */ + finfo->init_feature(vm, vcpu); + + /* Check if the ID register value indicates the feature */ + vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg); + fval = GET_ID_FIELD(reg_val, shift, is_sign); + TEST_ASSERT(fval >= min, "%s field of %s is too small (%ld)", + finfo->name, sreg->name, fval); + sreg->current_value = reg_val; + + pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n", __func__, + finfo->name, sreg->name, sreg->current_value); + + /* Make sure that the guest sees the same ID register value. 
*/ + ret = TEST_RUN(vm, vcpu); + TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s enabled", + __func__, finfo->name); + + test_vm_free(vm); + } + + reset_id_reg_info(); + + /* Test with feature disabled */ + vm = test_vm_create(1, guest_code_feature_check, NULL, NULL); + vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg); + fval = GET_ID_FIELD(reg_val, shift, is_sign); + if (finfo->vcpu_init || finfo->opt_in_cap) { + /* + * If the feature needs to be enabled with KVM_ARM_VCPU_INIT + * or opt-in capabilities, the default value of the ID register + * shouldn't indicate the feature. + */ + TEST_ASSERT(fval < min, "%s field of %s is too big (%ld)", + finfo->name, sreg->name, fval); + } else { + /* Update the relevant field to hide the feature. */ + fval = is_sign ? 0xf : 0x0; + reg_val = UPDATE_ID_UFIELD(reg_val, shift, fval); + ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg); + TEST_ASSERT(ret == 0, "Disabling %s failed %d\n", + finfo->name, ret); + sreg->current_value = reg_val; + } + + pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n", + __func__, finfo->name, sreg->name, sreg->current_value); + + /* Make sure that the guest sees the same ID register value. */ + ret = TEST_RUN(vm, vcpu); + finfo->run_test = 0; + test_vm_free(vm); +} + +/* + * For each opt-in feature in feature_test_info_table[], + * test if KVM_GET_ONE_REG/KVM_SET_ONE_REG works appropriately according + * to the feature configuration. See test_feature's comment for more detail. + */ +static void test_feature_all(void) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++) + test_feature(&feature_test_info_table[i]); +} + +int set_id_reg(struct kvm_vm *vm, uint32_t vcpu, struct id_reg_test_info *sreg, + uint64_t new_val) +{ + int ret; + uint64_t reg_val; + struct kvm_one_reg one_reg; + + one_reg.id = KVM_ARM64_SYS_REG(sreg->id); + one_reg.addr = (uint64_t)®_val; + + reg_val = new_val; + ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg); + if (!ret) + sreg->current_value = new_val; + + return ret; +} + + +/* + * Create a new VM with one vCPU, set the ID register to @new_val. + */ +int set_id_reg_vm(struct id_reg_test_info *sreg, uint64_t new_val) +{ + struct kvm_vm *vm; + int ret; + uint32_t vcpu = 0; + + reset_id_reg_info(); + + vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL); + ret = set_id_reg(vm, vcpu, sreg, new_val); + test_vm_free(vm); + + return ret; +} + +struct frac_info { + char *name; + struct id_reg_test_info *sreg; + struct id_reg_test_info *frac_sreg; + int shift; + int frac_shift; +}; + +struct frac_info frac_info_table[] = { + { + .name = "RAS", + .sreg = ID_REG_INFO(ID_AA64PFR0), + .shift = ID_AA64PFR0_RAS_SHIFT, + .frac_sreg = ID_REG_INFO(ID_AA64PFR1), + .frac_shift = ID_AA64PFR1_RASFRAC_SHIFT, + }, + { + .name = "MPAM", + .sreg = ID_REG_INFO(ID_AA64PFR0), + .shift = ID_AA64PFR0_MPAM_SHIFT, + .frac_sreg = ID_REG_INFO(ID_AA64PFR1), + .frac_shift = ID_AA64PFR1_MPAMFRAC_SHIFT, + }, + { + .name = "CSV2", + .sreg = ID_REG_INFO(ID_AA64PFR0), + .shift = ID_AA64PFR0_CSV2_SHIFT, + .frac_sreg = ID_REG_INFO(ID_AA64PFR1), + .frac_shift = ID_AA64PFR1_CSV2FRAC_SHIFT, + }, +}; + + +/* + * Make sure that we can set the fractional reg field even before setting + * the feature reg field. 
+ */ +int test_feature_frac_vm(struct frac_info *frac, uint64_t new_val, + uint64_t frac_new_val) +{ + struct kvm_vm *vm; + uint32_t vcpu = 0; + struct id_reg_test_info *sreg, *frac_sreg; + int ret; + + sreg = frac->sreg; + frac_sreg = frac->frac_sreg; + reset_id_reg_info(); + + reset_id_reg_info(); + + vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL); + + /* Set fractional reg field */ + ret = set_id_reg(vm, vcpu, frac_sreg, frac_new_val); + TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x", + frac_sreg->name, frac_new_val, ret); + + /* Set feature reg field */ + ret = set_id_reg(vm, vcpu, sreg, new_val); + TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x", + sreg->name, new_val, ret); + + ret = TEST_RUN(vm, vcpu); + test_vm_free(vm); + + return ret; +} + +/* + * Test for setting the feature fractional field of the ID register. + * When the (main) feature field of the ID register is the same as the host's, + * the fractional field value cannot be larger than the host's. + * (KVM_SET_ONE_REG should work but KVM_RUN with the larger value will fail) + * When the (main) feature field of the ID register is smaler than the host's, + * the fractional field can be any values. + * The function tests those behaviors. + */ +void test_feature_frac_one(struct frac_info *frac) +{ + uint64_t ftr_val, ftr_fval, frac_val, frac_fval; + int ret, shift, frac_shift; + struct id_reg_test_info *sreg, *frac_sreg; + + reset_id_reg_info(); + + sreg = frac->sreg; + shift = frac->shift; + frac_sreg = frac->frac_sreg; + frac_shift = frac->frac_shift; + + pr_debug("%s(%s Frac) reg:%s(shift:%d) frac reg:%s(shift:%d)\n", + __func__, frac->name, sreg->name, shift, frac_sreg->name, + frac_shift); + + /* + * Use the host's feature value for the guest. + * KVM_RUN with a larger frac value than the host's should fail. + * Otherwise, it should work. + */ + + frac_fval = GET_ID_UFIELD(frac_sreg->initial_value, frac_shift); + if (frac_fval > 0) { + /* Test with smaller frac value */ + frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value, + frac_shift, frac_fval - 1); + ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val); + TEST_ASSERT(!ret, "Test smaller %s frac (val:%lx) failed(%d)", + frac->name, frac_val, ret); + } + + reset_id_reg_info(); + + if (frac_fval != 0xf) { + /* Test with larger frac value */ + frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value, + frac_shift, frac_fval + 1); + + /* Setting larger frac shouldn't fail at ioctl */ + ret = set_id_reg_vm(frac_sreg, frac_val); + TEST_ASSERT(!ret, + "SET larger %s frac (%s org:%lx, val:%lx) failed(%d)", + frac->name, frac_sreg->name, frac_sreg->initial_value, + frac_val, ret); + + /* KVM_RUN with larger frac should fail */ + ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val); + TEST_ASSERT(ret, + "Test with larger %s frac (%s org:%lx, val:%lx) worked", + frac->name, frac_sreg->name, frac_sreg->initial_value, + frac_val); + } + + reset_id_reg_info(); + + /* + * Test with a smaller (main) feature value than the host's. 
+ */ + ftr_fval = GET_ID_UFIELD(sreg->initial_value, shift); + if (ftr_fval == 0) + /* Cannot set it to the smaller value */ + return; + + ftr_val = UPDATE_ID_UFIELD(sreg->initial_value, shift, ftr_fval - 1); + ret = test_feature_frac_vm(frac, ftr_val, frac_sreg->initial_value); + TEST_ASSERT(!ret, "Test with smaller %s (val:%lx) failed(%d)", + frac->name, ftr_val, ret); + + if (frac_fval > 0) { + /* Test with smaller frac value */ + frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value, + frac_shift, frac_fval - 1); + ret = test_feature_frac_vm(frac, ftr_val, frac_val); + TEST_ASSERT(!ret, + "Test with smaller %s and frac (val:%lx) failed(%d)", + frac->name, ftr_val, ret); + } + + if (frac_fval != 0xf) { + /* Test with larger frac value */ + frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value, + frac_shift, frac_fval + 1); + ret = test_feature_frac_vm(frac, ftr_val, frac_val); + TEST_ASSERT(!ret, + "Test with smaller %s and larger frac (val:%lx) failed(%d)", + frac->name, ftr_val, ret); + } +} + +/* + * Test for setting feature fractional fields of ID registers. + * See test_feature_frac_one's comments for more detail. + */ +void test_feature_frac_all(void) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(frac_info_table); i++) + test_feature_frac_one(&frac_info_table[i]); +} + +void run_test(void) +{ + test_id_regs_basic(); + test_feature_all(); + test_feature_ptrauth(); + test_feature_frac_all(); +} + +static void init_id_reg_info_one(struct id_reg_test_info *sreg, void *arg) +{ + struct kvm_one_reg one_reg; + uint64_t reg_val; + struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm; + uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid; + int ret; + + one_reg.addr = (uint64_t)®_val; + one_reg.id = KVM_ARM64_SYS_REG(sreg->id); + vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg); + sreg->current_value = reg_val; + + /* Keep the initial value to reset the register value later */ + sreg->initial_value = reg_val; + + /* Check if the register can be set to 0 */ + reg_val = 0; + ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg); + if (!ret) + sreg->can_clear = true; + + pr_debug("%s (0x%x): 0x%lx%s\n", sreg->name, sreg->id, + sreg->initial_value, sreg->can_clear ? ", can clear" : ""); +} + +/* + * Check if aarch32 is supported, and initialize id_reg_test_info for all + * the ID registers. Loop over the idreg list and populates each id_reg + * info with the initial value, current value, and can_clear value. + */ +static void init_test_info(void) +{ + uint64_t reg_val; + int fval; + struct kvm_vm *vm; + struct kvm_one_reg one_reg; + struct vm_vcpu_arg arg = { .vcpuid = 0 }; + + vm = test_vm_create(1, guest_code_do_nothing, NULL, NULL); + + /* Get ID_AA64PFR0_EL1 to check if AArch32 is supported */ + one_reg.addr = (uint64_t)®_val; + one_reg.id = KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1); + vcpu_ioctl(vm, 0, KVM_GET_ONE_REG, &one_reg); + fval = GET_ID_UFIELD(reg_val, ID_AA64PFR0_EL0_SHIFT); + if (fval == 0x1) + /* No AArch32 support */ + aarch32_support = false; + + /* Initialize id_reg_test_info */ + arg.vm = vm; + walk_id_reg_list(init_id_reg_info_one, &arg); + test_vm_free(vm); +} + +int main(void) +{ + + setbuf(stdout, NULL); + + if (kvm_check_cap(KVM_CAP_ARM_ID_REG_CONFIGURABLE) <= 0) { + print_skip("KVM_CAP_ARM_ID_REG_CONFIGURABLE is not supported"); + exit(KSFT_SKIP); + } + + init_test_info(); + run_test(); + return 0; +}
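
For readers unfamiliar with the GET_ID_UFIELD()/UPDATE_ID_UFIELD() helpers used throughout the test, the following is a minimal standalone sketch of the same 4-bit field arithmetic. The register value and the SVE field position are used purely as an illustration (ID_AA64PFR0_EL1.SVE lives at bits [35:32]); nothing here is taken from the patch itself.

#include <stdint.h>
#include <stdio.h>

/* Extract the unsigned 4-bit field that starts at bit position 'shift'. */
static uint64_t get_ufield(uint64_t regval, int shift)
{
	return (regval >> shift) & 0xfULL;
}

/* Replace the 4-bit field at 'shift' with 'fval', leaving other bits alone. */
static uint64_t update_ufield(uint64_t regval, int shift, uint64_t fval)
{
	return (regval & ~(0xfULL << shift)) | ((fval & 0xfULL) << shift);
}

int main(void)
{
	uint64_t reg = 0x0000000000000011ULL;	/* made-up ID register value */
	int sve_shift = 32;			/* ID_AA64PFR0_EL1.SVE, bits [35:32] */

	printf("SVE field before: %llu\n", (unsigned long long)get_ufield(reg, sve_shift));
	reg = update_ufield(reg, sve_shift, 1);	/* advertise the feature */
	printf("SVE field after:  %llu\n", (unsigned long long)get_ufield(reg, sve_shift));
	reg = update_ufield(reg, sve_shift, 0);	/* hide it again */
	printf("SVE field hidden: %llu\n", (unsigned long long)get_ufield(reg, sve_shift));
	return 0;
}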
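The KVM_GET_ONE_REG/KVM_SET_ONE_REG round trip that test_get_set_id_reg() performs through the selftest framework can also be exercised with the raw KVM ioctls. Below is a minimal sketch for arm64 only, with error handling mostly omitted; the ID_AA64PFR0_EL1 encoding via ARM64_SYS_REG(3, 0, 0, 4, 0) is assumed from the architectural sysreg numbering and is not part of the patch.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	struct kvm_vcpu_init init;
	struct kvm_one_reg reg;
	uint64_t val;
	int kvm, vm, vcpu;

	kvm = open("/dev/kvm", O_RDWR);
	vm = ioctl(kvm, KVM_CREATE_VM, 0);
	vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

	/* Initialize the vCPU with the preferred target and no extra features. */
	ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
	memset(init.features, 0, sizeof(init.features));
	ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

	/* ID_AA64PFR0_EL1: Op0=3, Op1=0, CRn=0, CRm=4, Op2=0 (assumed encoding). */
	reg.id = ARM64_SYS_REG(3, 0, 0, 4, 0);
	reg.addr = (uint64_t)&val;

	if (ioctl(vcpu, KVM_GET_ONE_REG, &reg))
		return 1;
	printf("ID_AA64PFR0_EL1 = 0x%016llx\n", (unsigned long long)val);

	/* Writing back the value KVM_GET_ONE_REG returned is expected to succeed. */
	if (ioctl(vcpu, KVM_SET_ONE_REG, &reg))
		return 1;
	return 0;
}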
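The feature_test_info_table entries encode three opt-in mechanisms: a VM-wide capability enabled with KVM_ENABLE_CAP (MTE), a per-vCPU feature bit passed to KVM_ARM_VCPU_INIT (SVE, PMUv3), and extra per-vCPU setup such as KVM_ARM_VCPU_FINALIZE (SVE) or the PMU device attribute (PMUv3). The sketch below shows the SVE and MTE paths with raw ioctls, assuming the host advertises KVM_CAP_ARM_SVE and KVM_CAP_ARM_MTE; it is an illustration of the flow, not code from the selftest.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	struct kvm_enable_cap mte_cap = { .cap = KVM_CAP_ARM_MTE };
	struct kvm_vcpu_init init;
	int feature = KVM_ARM_VCPU_SVE;
	int kvm, vm, vcpu;

	kvm = open("/dev/kvm", O_RDWR);
	vm = ioctl(kvm, KVM_CREATE_VM, 0);

	/* MTE is a VM-wide opt-in: enable the capability before creating vCPUs. */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_ARM_MTE) > 0)
		ioctl(vm, KVM_ENABLE_CAP, &mte_cap);

	vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

	ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
	memset(init.features, 0, sizeof(init.features));

	/* SVE is a per-vCPU opt-in: set the feature bit at init time... */
	if (ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0)
		init.features[0] |= 1UL << KVM_ARM_VCPU_SVE;

	ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

	/* ...and finalize it before the vCPU can be run. */
	if (init.features[0] & (1UL << KVM_ARM_VCPU_SVE))
		ioctl(vcpu, KVM_ARM_VCPU_FINALIZE, &feature);

	return 0;
}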