From patchwork Thu Dec 1 10:49:12 2022
X-Patchwork-Submitter: Akihiko Odaki
X-Patchwork-Id: 13061207
From: Akihiko Odaki
Cc: linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
 kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
 Mathieu Poirier, Oliver Upton, Suzuki K Poulose, Alexandru Elisei,
 James Morse, Marc Zyngier, Will Deacon, Catalin Marinas,
 asahi@lists.linux.dev, Alyssa Rosenzweig, Sven Peter, Hector Martin,
 Akihiko Odaki
Subject: [PATCH 1/3] KVM: arm64: Make CCSIDRs consistent
Date: Thu, 1 Dec 2022 19:49:12 +0900
Message-Id: <20221201104914.28944-2-akihiko.odaki@daynix.com>
In-Reply-To: <20221201104914.28944-1-akihiko.odaki@daynix.com>
References: <20221201104914.28944-1-akihiko.odaki@daynix.com>

A vCPU sees masked CCSIDRs when the physical CPUs have mismatched cache
types or the vCPU has 32-bit EL1. Perform the same masking for the
ioctls too, so that they report values consistent with what the vCPU
actually sees.

Signed-off-by: Akihiko Odaki
---
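As a reference, not part of the patch: with the traditional CCSIDR_EL1
layout (LineSize in bits [2:0], Associativity in bits [12:3], NumSets in
bits [27:13], i.e. without FEAT_CCIDX), clearing GENMASK(27, 3) is what
makes a data or unified cache read back as 1 set and 1 way. A minimal
decoding sketch with hypothetical helper names, assuming that layout and
kernel types:

/* NumSets is encoded as "number of sets - 1" in CCSIDR_EL1[27:13]. */
static inline unsigned int ccsidr_num_sets(u32 ccsidr)
{
	return ((ccsidr >> 13) & 0x7fff) + 1;
}

/* Associativity is encoded as "number of ways - 1" in CCSIDR_EL1[12:3]. */
static inline unsigned int ccsidr_associativity(u32 ccsidr)
{
	return ((ccsidr >> 3) & 0x3ff) + 1;
}

/* For a masked value (ccsidr & ~GENMASK(27, 3)), both helpers return 1. */
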
 arch/arm64/include/asm/kvm_emulate.h |  9 ++++--
 arch/arm64/kvm/sys_regs.c            | 45 ++++++++++++++--------------
 2 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 9bdba47f7e14..b45cf8903190 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -61,6 +61,12 @@ static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 }
 #endif
 
+static inline bool vcpu_cache_overridden(struct kvm_vcpu *vcpu)
+{
+	return cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
+	       vcpu_el1_is_32bit(vcpu);
+}
+
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
@@ -88,8 +94,7 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 	if (vcpu_el1_is_32bit(vcpu))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
 
-	if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
-	    vcpu_el1_is_32bit(vcpu))
+	if (vcpu_cache_overridden(vcpu))
 		vcpu->arch.hcr_el2 |= HCR_TID2;
 
 	if (kvm_has_mte(vcpu->kvm))
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f4a7c5abcbca..273ed1aaa6b3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -88,7 +88,7 @@ static u32 cache_levels;
 #define CSSELR_MAX 14
 
 /* Which cache CCSIDR represents depends on CSSELR value. */
-static u32 get_ccsidr(u32 csselr)
+static u32 get_ccsidr(struct kvm_vcpu *vcpu, u32 csselr)
 {
 	u32 ccsidr;
 
@@ -99,6 +99,21 @@ static u32 get_ccsidr(u32 csselr)
 	ccsidr = read_sysreg(ccsidr_el1);
 	local_irq_enable();
 
+	/*
+	 * Guests should not be doing cache operations by set/way at all, and
+	 * for this reason, we trap them and attempt to infer the intent, so
+	 * that we can flush the entire guest's address space at the appropriate
+	 * time.
+	 * To prevent this trapping from causing performance problems, let's
+	 * expose the geometry of all data and unified caches (which are
+	 * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
+	 * [If guests should attempt to infer aliasing properties from the
+	 * geometry (which is not permitted by the architecture), they would
+	 * only do so for virtually indexed caches.]
+	 */
+	if (vcpu_cache_overridden(vcpu) && !(csselr & 1)) // data or unified cache
+		ccsidr &= ~GENMASK(27, 3);
+
 	return ccsidr;
 }
 
@@ -1300,22 +1315,8 @@ static bool access_ccsidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		return write_to_read_only(vcpu, p, r);
 
 	csselr = vcpu_read_sys_reg(vcpu, CSSELR_EL1);
-	p->regval = get_ccsidr(csselr);
+	p->regval = get_ccsidr(vcpu, csselr);
 
-	/*
-	 * Guests should not be doing cache operations by set/way at all, and
-	 * for this reason, we trap them and attempt to infer the intent, so
-	 * that we can flush the entire guest's address space at the appropriate
-	 * time.
-	 * To prevent this trapping from causing performance problems, let's
-	 * expose the geometry of all data and unified caches (which are
-	 * guaranteed to be PIPT and thus non-aliasing) as 1 set and 1 way.
-	 * [If guests should attempt to infer aliasing properties from the
-	 * geometry (which is not permitted by the architecture), they would
-	 * only do so for virtually indexed caches.]
-	 */
-	if (!(csselr & 1)) // data or unified cache
-		p->regval &= ~GENMASK(27, 3);
 	return true;
 }
 
@@ -2686,7 +2687,7 @@ static bool is_valid_cache(u32 val)
 	}
 }
 
-static int demux_c15_get(u64 id, void __user *uaddr)
+static int demux_c15_get(struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
 {
 	u32 val;
 	u32 __user *uval = uaddr;
@@ -2705,13 +2706,13 @@ static int demux_c15_get(u64 id, void __user *uaddr)
 		if (!is_valid_cache(val))
 			return -ENOENT;
 
-		return put_user(get_ccsidr(val), uval);
+		return put_user(get_ccsidr(vcpu, val), uval);
 	default:
 		return -ENOENT;
 	}
 }
 
-static int demux_c15_set(u64 id, void __user *uaddr)
+static int demux_c15_set(struct kvm_vcpu *vcpu, u64 id, void __user *uaddr)
 {
 	u32 val, newval;
 	u32 __user *uval = uaddr;
@@ -2734,7 +2735,7 @@ static int demux_c15_set(u64 id, void __user *uaddr)
 			return -EFAULT;
 
 		/* This is also invariant: you can't change it. */
-		if (newval != get_ccsidr(val))
+		if (newval != get_ccsidr(vcpu, val))
 			return -EINVAL;
 		return 0;
 	default:
@@ -2773,7 +2774,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	int err;
 
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
-		return demux_c15_get(reg->id, uaddr);
+		return demux_c15_get(vcpu, reg->id, uaddr);
 
 	err = get_invariant_sys_reg(reg->id, uaddr);
 	if (err != -ENOENT)
@@ -2817,7 +2818,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	int err;
 
 	if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_DEMUX)
-		return demux_c15_set(reg->id, uaddr);
+		return demux_c15_set(vcpu, reg->id, uaddr);
 
 	err = set_invariant_sys_reg(reg->id, uaddr);
 	if (err != -ENOENT)
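
For completeness, a hedged userspace sketch of the ioctl path this patch
makes consistent with the in-guest view: reading a CCSIDR through the
KVM_REG_ARM_DEMUX encoding served by demux_c15_get(). Illustrative only;
read_ccsidr() and vcpu_fd are hypothetical names, and the register id
mirrors what KVM_GET_REG_LIST reports for the demuxed CCSIDR registers.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Fetch the CCSIDR for a given CSSELR-style cache selector via
 * KVM_GET_ONE_REG on an existing vCPU file descriptor.
 */
static int read_ccsidr(int vcpu_fd, uint32_t csselr, uint32_t *ccsidr)
{
	struct kvm_one_reg reg = {
		.id = KVM_REG_ARM64 | KVM_REG_SIZE_U32 | KVM_REG_ARM_DEMUX |
		      KVM_REG_ARM_DEMUX_ID_CCSIDR |
		      (csselr & KVM_REG_ARM_DEMUX_VAL_MASK),
		.addr = (uint64_t)(unsigned long)ccsidr,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}

With this patch applied, the value stored in *ccsidr matches what the
guest reads from CCSIDR_EL1, including the 1 set / 1 way override when
the host has mismatched cache types or the vCPU runs 32-bit EL1.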